feat: add continuous integration and deployment documentation, including CI stages, local testing, and Kubernetes deployment guidelines

This commit is contained in:
2025-11-12 12:04:25 +01:00
parent 29f16139a3
commit c9ac7195c7
3 changed files with 316 additions and 42 deletions


cd calminer
```
2. **Environment Configuration**
Copy the appropriate environment file for your deployment:
```bash
# For development
cp .env.development .env
# For staging
cp .env.staging .env
# For production
cp .env.production .env
# Then edit .env with your actual database credentials
```
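The copy step above can be scripted so the right file is chosen from a single environment name; a minimal sketch (the `pick_env_file` helper is illustrative, not part of the repository):
```bash
# Map an environment name to its env file (hypothetical helper).
pick_env_file() {
  case "$1" in
    development) echo ".env.development" ;;
    staging)     echo ".env.staging" ;;
    production)  echo ".env.production" ;;
    *) echo "unknown environment: $1" >&2; return 1 ;;
  esac
}

# Usage: cp "$(pick_env_file staging)" .env
```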
3. **Build and Start the Docker Containers**
Run the following command to build and start the Docker containers:
```bash
# For development (includes live reload and source mounting)
docker compose up --build
# For staging
docker compose -f docker-compose.yml -f docker-compose.staging.yml up --build
# For production
docker compose -f docker-compose.yml -f docker-compose.prod.yml up --build
```
These commands build the Docker images and start the containers defined in the selected Compose files.
4. **Access the Application**
Once the containers are up and running, you can access the Calminer application by navigating to `http://localhost:8003` in your web browser.
If you are running the application on a remote server, replace `localhost` with the server's IP address or domain name.
5. **Database Initialization**
The application container executes `/app/scripts/docker-entrypoint.sh` before launching the API. This entrypoint runs `python -m scripts.run_migrations`, which applies all Alembic migrations and keeps the schema current on every startup. No additional action is required when using Docker Compose, but you can review the logs to confirm the migrations completed successfully.
The script is idempotent; it will only apply pending migrations.
6. **Seed Default Accounts and Roles**
After the schema is in place, run the initial data seeding utility so the default roles and administrator account exist:
The script reads the standard database environment variables (see below) and supports the following overrides:
- `CALMINER_SEED_ADMIN_EMAIL` (default `admin@calminer.local` for dev, `admin@calminer.com` for prod)
- `CALMINER_SEED_ADMIN_USERNAME` (default `admin`)
- `CALMINER_SEED_ADMIN_PASSWORD` (default `ChangeMe123!` — change in production)
- `CALMINER_SEED_ADMIN_ROLES` (comma list, always includes `admin`)
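The "always includes `admin`" guarantee can be illustrated with a small shell sketch (`normalize_roles` is a hypothetical helper mirroring the documented behaviour, not part of the seeding script):
```bash
# Prepend admin, then de-duplicate the comma-separated role list.
normalize_roles() {
  printf 'admin,%s' "$1" | tr ',' '\n' | grep -v '^$' | sort -u | paste -s -d, -
}

normalize_roles 'editor,admin'   # → admin,editor
```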
### Environment Variables
The application uses environment variables to configure various settings. You can set these variables in a `.env` file in the root directory of the project. Refer to the provided `.env.*` files for examples and default values.
Key variables relevant to import/export workflows:
| Variable | Development | Staging | Production | Description |
| ----------------------------- | ----------- | ------- | ---------- | ------------------------------------------------------------------------------- |
| `CALMINER_EXPORT_MAX_ROWS` | 1000 | 50000 | 100000 | Optional safety guard to limit the number of rows exported in a single request. |
| `CALMINER_EXPORT_METADATA` | `true` | `true` | `true` | Controls whether metadata sheets are generated by default during Excel exports. |
| `CALMINER_IMPORT_STAGING_TTL` | `300` | `600` | `3600` | Controls how long staged import tokens remain valid before expiration. |
| `CALMINER_IMPORT_MAX_ROWS` | `10000` | `50000` | `100000` | Optional guard to prevent excessively large import files. |
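For example, a staging `.env` following the table above could contain (only the import/export variables are shown; database credentials and other settings are omitted):

```bash
CALMINER_EXPORT_MAX_ROWS=50000
CALMINER_EXPORT_METADATA=true
CALMINER_IMPORT_STAGING_TTL=600
CALMINER_IMPORT_MAX_ROWS=50000
```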
### Docker Environment Parity
The Docker Compose configurations ensure environment parity across development, staging, and production:
- **Development**: Uses `docker-compose.override.yml` with live code reloading, debug logging, and relaxed resource limits.
- **Staging**: Uses `docker-compose.staging.yml` with health checks, moderate resource limits, and staging-specific configurations.
- **Production**: Uses `docker-compose.prod.yml` with strict resource limits, production logging, and required external database configuration.
All environments use the same base `docker-compose.yml` and share common environment variables for consistency.
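To illustrate what such an override layer typically adds, here is a sketch of a staging override (the service name, health endpoint, and resource limits are assumptions, not the project's actual `docker-compose.staging.yml`):

```yaml
services:
  app:
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8003/health"]
      interval: 30s
      retries: 3
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 512M
```

Compose merges this file over the base `docker-compose.yml` when both are passed with `-f`, with later files overriding earlier ones.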
### Volumes
Ensure that these volumes are properly configured to avoid data loss during container restarts or removals.
## Kubernetes Deployment
For production deployments, Calminer can be deployed on a Kubernetes cluster using the provided manifests in the `k8s/` directory.
### K8s Prerequisites
- Kubernetes cluster (e.g., minikube for local testing, or a managed service such as GKE or EKS)
- kubectl configured to access the cluster
- Helm (optional, for advanced deployments)
### K8s Deployment Steps
1. **Clone the Repository and Build Image**
```bash
git clone https://git.allucanget.biz/allucanget/calminer.git
cd calminer
docker build -t registry.example.com/calminer:latest .
docker push registry.example.com/calminer:latest
```
2. **Update Manifests**
Edit the manifests in `k8s/` to match your environment:
- Update image registry in `deployment.yaml`
- Update host in `ingress.yaml`
- Update secrets in `secret.yaml` with base64 encoded values
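Secret values in `secret.yaml` must be base64 encoded; `printf` avoids the trailing newline that `echo` would silently include (the credential value below is a placeholder):
```bash
printf '%s' 'changeme' | base64
# → Y2hhbmdlbWU=
```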
3. **Deploy to Kubernetes**
```bash
kubectl apply -f k8s/
```
4. **Verify Deployment**
```bash
kubectl get pods
kubectl get services
kubectl get ingress
```
5. **Access the Application**
The application will be available at the ingress host (e.g., `https://calminer.example.com`).
### Environment Parity
The Kubernetes deployment uses the same environment variables as Docker Compose, ensuring consistency across environments. Secrets are managed via Kubernetes Secrets, and configurations via ConfigMaps.
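As an illustration of that split (resource names and keys are examples, not the manifests shipped in `k8s/`):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: calminer-secrets
type: Opaque
data:
  DATABASE_PASSWORD: Y2hhbmdlbWU=   # base64 of "changeme"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: calminer-config
data:
  CALMINER_EXPORT_METADATA: "true"
  CALMINER_IMPORT_STAGING_TTL: "3600"
```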
### Scaling
The deployment is configured with 3 replicas for high availability. You can scale as needed:
```bash
kubectl scale deployment calminer-app --replicas=5
```
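For automatic scaling instead of manual `kubectl scale`, a HorizontalPodAutoscaler can be added; a minimal sketch (the CPU threshold and replica bounds are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: calminer-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: calminer-app
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```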
### Monitoring
Ensure your monitoring stack (e.g., Prometheus) scrapes the `/metrics` endpoint from the service.
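A corresponding scrape job might look like this (the job name, service name, and port are assumptions to adapt to your setup):

```yaml
scrape_configs:
  - job_name: calminer
    metrics_path: /metrics
    static_configs:
      - targets: ["calminer-app:8003"]
```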
## CI/CD Pipeline
Calminer uses Gitea Actions for continuous integration and deployment. The CI/CD pipeline is defined in `.gitea/workflows/cicache.yml` and includes the following stages:
### CI Stages
1. **Lint**: Runs Ruff for Python linting, Black for code formatting, and Bandit for security scanning.
2. **Test**: Executes the full pytest suite with coverage reporting (80% minimum), using a PostgreSQL service container.
3. **Build**: Builds Docker images using Buildx and pushes to the Gitea registry on the main branch.
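Conceptually, the three stages map onto a workflow skeleton like the following (a simplified sketch; the actual `cicache.yml` differs in details such as caching, registry login, and image push):

```yaml
# Simplified sketch of the three CI stages, not the project's workflow file.
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ruff check . && black --check . && bandit -r .
  test:
    needs: lint
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16
    steps:
      - uses: actions/checkout@v4
      - run: pytest --cov --cov-fail-under=80
  build:
    needs: test
    if: github.ref == 'refs/heads/main'   # Gitea Actions supports GitHub-style contexts
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
```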
### CD Stages
1. **Deploy**: Deploys to the staging or production Kubernetes cluster when the triggering commit message contains `[deploy staging]` or `[deploy production]`.
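The commit-message trigger can be reproduced locally with a simple check (a sketch; the real workflow uses its own expression syntax):
```bash
# Return the deploy target encoded in a commit message, if any.
deploy_target() {
  case "$1" in
    *"[deploy production]"*) echo "production" ;;
    *"[deploy staging]"*)    echo "staging" ;;
    *)                       echo "none" ;;
  esac
}

deploy_target 'fix: typo [deploy staging]'   # → staging
```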
### Required Secrets
The following secrets must be configured in your Gitea repository:
- `REGISTRY_URL`: Gitea registry URL
- `REGISTRY_USERNAME`: Registry username
- `REGISTRY_PASSWORD`: Registry password
- `STAGING_KUBE_CONFIG`: Base64-encoded kubeconfig for staging cluster
- `PROD_KUBE_CONFIG`: Base64-encoded kubeconfig for production cluster
### Deployment Triggers
- **Automatic**: Images are built and pushed on every push to `main` or `develop` branches.
- **Staging Deployment**: Include `[deploy staging]` in your commit message to trigger staging deployment.
- **Production Deployment**: Include `[deploy production]` in your commit message to trigger production deployment.
### Monitoring CI/CD
- View pipeline status in the Gitea Actions tab.
- Test artifacts (coverage, pytest reports) are uploaded for each run.
- Docker build logs are available for troubleshooting build failures.
- Deployment runs publish Kubernetes rollout diagnostics under the `deployment-logs` artifact (`/logs/deployment/`), which includes pod listings, deployment manifests, and recent container logs.
## Troubleshooting