4 Commits

- `723f6a62b8` feat: Enhance CI workflows with health checks and update PostgreSQL image version (2025-10-27 21:12:46 +01:00)
  Some checks failed: Run Tests / e2e tests (push) successful in 1m33s; lint tests (push) failing after 2s; unit tests (push) failing after 2s
- `dcb08ab1b8` feat: Add production and development Docker Compose configurations, health check endpoint, and update documentation (2025-10-27 20:57:36 +01:00)
- `a6a5f630cc` feat: Add initial Docker Compose configuration for API service (2025-10-27 19:46:35 +01:00)
- `b56045ca6a` feat: Add Docker Compose configuration for testing and API services (2025-10-27 19:44:43 +01:00)
12 changed files with 460 additions and 6 deletions

@@ -53,6 +53,8 @@ jobs:
- name: Set up QEMU and Buildx
uses: docker/setup-buildx-action@v3
with:
install: false
- name: Log in to Gitea registry
if: ${{ steps.meta.outputs.on_default == 'true' }}
@@ -72,3 +74,5 @@ jobs:
tags: |
${{ env.REGISTRY_URL }}/${{ env.REGISTRY_ORG }}/${{ env.REGISTRY_IMAGE_NAME }}:latest
${{ env.REGISTRY_URL }}/${{ env.REGISTRY_ORG }}/${{ env.REGISTRY_IMAGE_NAME }}:${{ steps.meta.outputs.sha }}
cache-from: type=gha
cache-to: type=gha,mode=max

@@ -49,3 +49,16 @@ jobs:
-e DATABASE_NAME=${{ secrets.DATABASE_NAME }} \
-e DATABASE_SCHEMA=${{ secrets.DATABASE_SCHEMA }} \
"$IMAGE_PATH:$IMAGE_SHA"
for attempt in {1..10}; do
if curl -fsS http://localhost:8000/health >/dev/null; then
echo "Deployment health check passed"
exit 0
fi
echo "Health check attempt ${attempt} failed; retrying in 3s"
sleep 3
done
echo "Deployment health check failed after retries" >&2
docker logs calminer >&2 || true
exit 1

@@ -24,7 +24,7 @@ jobs:
target: [unit, e2e, lint]
services:
postgres:
image: postgres:16-alpine
image: postgres:16
env:
POSTGRES_DB: calminer_ci
POSTGRES_USER: calminer
@@ -36,6 +36,7 @@ jobs:
--health-retries 10
steps:
- name: Install Node.js runtime
if: ${{ matrix.target == 'e2e' }}
shell: bash
run: |
set -euo pipefail
@@ -63,8 +64,9 @@ jobs:
- name: Prepare Python environment
uses: ./.gitea/actions/setup-python-env
with:
install-playwright: ${{ matrix.target != 'e2e' }}
install-playwright: ${{ matrix.target == 'e2e' }}
use-system-python: 'true'
run-db-setup: ${{ matrix.target != 'lint' }}
- name: Run tests
run: |

@@ -78,7 +78,19 @@ docker run --rm -p 8000:8000 ^
### Orchestrated Deployment
Use `docker compose` or an orchestrator of your choice to co-locate PostgreSQL/Redis alongside the app when needed. The image expects migrations to be applied before startup.
Use `docker compose` or an orchestrator of your choice to co-locate PostgreSQL/Redis/Traefik alongside the app when needed. The image expects migrations to be applied before startup.
### Production docker-compose workflow
`docker-compose.prod.yml` covers the API plus optional Traefik (`reverse-proxy` profile) and on-host Postgres (`local-db` profile). Commands, health checks, and environment variables are documented in [docs/quickstart.md](docs/quickstart.md#compose-driven-production-stack) and expanded in [docs/architecture/07_deployment_view.md](docs/architecture/07_deployment_view.md).
### Development docker-compose workflow
`docker-compose.dev.yml` runs FastAPI (with reload) and Postgres in a single stack. See [docs/quickstart.md](docs/quickstart.md#compose-driven-development-stack) for lifecycle commands and troubleshooting, plus the architecture chapter ([docs/architecture/15_development_setup.md](docs/architecture/15_development_setup.md)) for deeper context.
### Test docker-compose workflow
`docker-compose.test.yml` mirrors the CI pipeline: it provisions Postgres, runs the database bootstrap script, and executes pytest. Usage examples live in [docs/quickstart.md](docs/quickstart.md#compose-driven-test-stack).
## CI/CD expectations

backups/.gitkeep Normal file

config/setup_production.env.example Normal file

@@ -0,0 +1,35 @@
# Copy this file to config/setup_production.env and replace values with production secrets
# Container image and runtime configuration
CALMINER_IMAGE=registry.example.com/calminer/api:latest
CALMINER_DOMAIN=calminer.example.com
TRAEFIK_ACME_EMAIL=ops@example.com
CALMINER_API_PORT=8000
UVICORN_WORKERS=4
UVICORN_LOG_LEVEL=info
CALMINER_NETWORK=calminer_backend
API_LIMIT_CPUS=1.0
API_LIMIT_MEMORY=1g
API_RESERVATION_MEMORY=512m
TRAEFIK_LIMIT_CPUS=0.5
TRAEFIK_LIMIT_MEMORY=512m
POSTGRES_LIMIT_CPUS=1.0
POSTGRES_LIMIT_MEMORY=2g
POSTGRES_RESERVATION_MEMORY=1g
# Application database connection
DATABASE_DRIVER=postgresql+psycopg2
DATABASE_HOST=production-db.internal
DATABASE_PORT=5432
DATABASE_NAME=calminer
DATABASE_USER=calminer_app
DATABASE_PASSWORD=ChangeMe123!
DATABASE_SCHEMA=public
# Optional consolidated SQLAlchemy URL (overrides granular settings when set)
# DATABASE_URL=postgresql+psycopg2://calminer_app:ChangeMe123!@production-db.internal:5432/calminer
# Superuser credentials used by scripts/setup_database.py for migrations/seed data
DATABASE_SUPERUSER=postgres
DATABASE_SUPERUSER_PASSWORD=ChangeMeSuper123!
DATABASE_SUPERUSER_DB=postgres

docker-compose.prod.yml Normal file

@@ -0,0 +1,130 @@
services:
api:
image: ${CALMINER_IMAGE:-calminer-api:latest}
build:
context: .
dockerfile: Dockerfile
restart: unless-stopped
env_file:
- config/setup_production.env
environment:
UVICORN_WORKERS: ${UVICORN_WORKERS:-2}
UVICORN_LOG_LEVEL: ${UVICORN_LOG_LEVEL:-info}
command:
[
"sh",
"-c",
"uvicorn main:app --host 0.0.0.0 --port 8000 --workers ${UVICORN_WORKERS:-2} --log-level ${UVICORN_LOG_LEVEL:-info}",
]
ports:
- "${CALMINER_API_PORT:-8000}:8000"
deploy:
resources:
limits:
cpus: ${API_LIMIT_CPUS:-1.0}
memory: ${API_LIMIT_MEMORY:-1g}
reservations:
memory: ${API_RESERVATION_MEMORY:-512m}
healthcheck:
test:
- "CMD-SHELL"
- 'python -c "import urllib.request; urllib.request.urlopen(''http://127.0.0.1:8000/health'').read()"'
interval: 30s
timeout: 10s
retries: 5
start_period: 30s
networks:
- calminer_backend
logging:
driver: json-file
options:
max-size: "10m"
max-file: "3"
labels:
- "traefik.enable=true"
- "traefik.http.routers.calminer.rule=Host(`${CALMINER_DOMAIN}`)"
- "traefik.http.routers.calminer.entrypoints=websecure"
- "traefik.http.routers.calminer.tls.certresolver=letsencrypt"
- "traefik.http.services.calminer.loadbalancer.server.port=8000"
traefik:
image: traefik:v3.1
restart: unless-stopped
command:
- "--providers.docker=true"
- "--providers.docker.exposedbydefault=false"
- "--entrypoints.web.address=:80"
- "--entrypoints.websecure.address=:443"
- "--certificatesresolvers.letsencrypt.acme.tlschallenge=true"
- "--certificatesresolvers.letsencrypt.acme.email=${TRAEFIK_ACME_EMAIL:?TRAEFIK_ACME_EMAIL not set}"
- "--certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json"
ports:
- "80:80"
- "443:443"
deploy:
resources:
limits:
cpus: ${TRAEFIK_LIMIT_CPUS:-0.5}
memory: ${TRAEFIK_LIMIT_MEMORY:-512m}
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- traefik_letsencrypt:/letsencrypt
networks:
- calminer_backend
profiles:
- reverse-proxy
healthcheck:
test:
- "CMD"
- "traefik"
- "healthcheck"
- "--entrypoints=web"
- "--entrypoints=websecure"
interval: 30s
timeout: 10s
retries: 5
postgres:
image: postgres:16
profiles:
- local-db
restart: unless-stopped
environment:
POSTGRES_DB: ${POSTGRES_DB:-calminer}
POSTGRES_USER: ${POSTGRES_USER:-calminer}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-changeme}
LANG: en_US.UTF-8
LC_ALL: en_US.UTF-8
POSTGRES_INITDB_ARGS: --encoding=UTF8 --locale=en_US.UTF-8
ports:
- "${CALMINER_DB_PORT:-5432}:5432"
deploy:
resources:
limits:
cpus: ${POSTGRES_LIMIT_CPUS:-1.0}
memory: ${POSTGRES_LIMIT_MEMORY:-2g}
reservations:
memory: ${POSTGRES_RESERVATION_MEMORY:-1g}
volumes:
- pg_data_prod:/var/lib/postgresql/data
- ./backups:/backups
healthcheck:
test:
[
"CMD-SHELL",
"pg_isready -U ${POSTGRES_USER:-calminer} -d ${POSTGRES_DB:-calminer}",
]
interval: 30s
timeout: 10s
retries: 5
networks:
- calminer_backend
networks:
calminer_backend:
name: ${CALMINER_NETWORK:-calminer_backend}
driver: bridge
volumes:
pg_data_prod:
traefik_letsencrypt:

docker-compose.test.yml Normal file

@@ -0,0 +1,82 @@
services:
tests:
build:
context: .
dockerfile: Dockerfile
command: >
sh -c "set -eu; pip install -r requirements-test.txt; python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data --dry-run -v; python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data -v; pytest $${PYTEST_TARGET:-tests/unit}"
environment:
DATABASE_DRIVER: postgresql
DATABASE_HOST: postgres
DATABASE_PORT: 5432
DATABASE_NAME: calminer_test
DATABASE_USER: calminer_test
DATABASE_PASSWORD: calminer_test_password
DATABASE_SCHEMA: public
DATABASE_SUPERUSER: postgres
DATABASE_SUPERUSER_PASSWORD: postgres
DATABASE_SUPERUSER_DB: postgres
DATABASE_URL: postgresql+psycopg2://calminer_test:calminer_test_password@postgres:5432/calminer_test
PYTEST_TARGET: tests/unit
PYTHONPATH: /app
depends_on:
postgres:
condition: service_healthy
volumes:
- .:/app
- pip_cache_test:/root/.cache/pip
networks:
- calminer_test
api:
build:
context: .
dockerfile: Dockerfile
command: uvicorn main:app --host 0.0.0.0 --port 8000 --reload
environment:
DATABASE_DRIVER: postgresql
DATABASE_HOST: postgres
DATABASE_PORT: 5432
DATABASE_NAME: calminer_test
DATABASE_USER: calminer_test
DATABASE_PASSWORD: calminer_test_password
DATABASE_SCHEMA: public
DATABASE_URL: postgresql+psycopg2://calminer_test:calminer_test_password@postgres:5432/calminer_test
PYTHONPATH: /app
depends_on:
postgres:
condition: service_healthy
ports:
- "8001:8000"
networks:
- calminer_test
postgres:
image: postgres:16
restart: unless-stopped
environment:
POSTGRES_DB: calminer_test
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
LANG: en_US.UTF-8
LC_ALL: en_US.UTF-8
POSTGRES_INITDB_ARGS: --encoding=UTF8 --locale=en_US.UTF-8
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres -d calminer_test"]
interval: 10s
timeout: 5s
retries: 5
ports:
- "5433:5432"
volumes:
- pg_data_test:/var/lib/postgresql/data
networks:
- calminer_test
networks:
calminer_test:
driver: bridge
volumes:
pg_data_test:
pip_cache_test:

docker-compose.yml Normal file

@@ -0,0 +1,39 @@
services:
api:
image: ${CALMINER_IMAGE:-calminer-api:latest}
build:
context: .
dockerfile: Dockerfile
restart: unless-stopped
env_file:
- config/setup_production.env
environment:
UVICORN_WORKERS: ${UVICORN_WORKERS:-2}
UVICORN_LOG_LEVEL: ${UVICORN_LOG_LEVEL:-info}
command:
[
"sh",
"-c",
"uvicorn main:app --host 0.0.0.0 --port 8000 --workers ${UVICORN_WORKERS:-2} --log-level ${UVICORN_LOG_LEVEL:-info}",
]
ports:
- "${CALMINER_API_PORT:-8000}:8000"
healthcheck:
test:
- "CMD-SHELL"
- 'python -c "import urllib.request; urllib.request.urlopen(''http://127.0.0.1:8000/docs'').read()"'
interval: 30s
timeout: 10s
retries: 5
start_period: 30s
networks:
- calminer_backend
logging:
driver: json-file
options:
max-size: "10m"
max-file: "3"
networks:
calminer_backend:
driver: bridge

docs/architecture/07_deployment_view.md

@@ -1,6 +1,6 @@
---
title: "07 — Deployment View"
description: "Describe deployment topology, infrastructure components, and environments (dev/stage/prod)."
title: '07 — Deployment View'
description: 'Describe deployment topology, infrastructure components, and environments (dev/stage/prod).'
status: draft
---
@@ -85,6 +85,14 @@ The development environment is set up for local development and testing. It incl
- Local PostgreSQL instance (docker compose recommended, script available at `docker-compose.postgres.yml`)
- FastAPI server running in debug mode
`docker-compose.dev.yml` encapsulates this topology:
- `api` service mounts the repository for live reloads (`uvicorn --reload`) and depends on the database health check.
- `db` service uses the Debian-based `postgres:16` image with UTF-8 locale configuration and persists data in `pg_data_dev`.
- A shared `calminer_backend` bridge network keeps traffic contained; ports 8000/5432 are published for local tooling.
See [docs/quickstart.md](../quickstart.md#compose-driven-development-stack) for command examples and volume maintenance tips.
### Testing Environment
The testing environment is set up for automated testing and quality assurance. It includes:
@@ -93,6 +101,14 @@ The testing environment is set up for automated testing and quality assurance. I
- FastAPI server running in testing mode
- Automated test suite (e.g., pytest) for running unit and integration tests
`docker-compose.test.yml` provisions an ephemeral CI-like stack:
- `tests` service builds the application image, installs `requirements-test.txt`, runs the database setup script (dry-run + apply), then executes pytest.
- `api` service is available on port 8001 for manual verification against the test database.
- `postgres` service seeds a disposable Postgres 16 instance with health checks and named volumes (`pg_data_test`, `pip_cache_test`).
Typical commands mirror the CI workflow (`docker compose -f docker-compose.test.yml run --rm tests`); the [quickstart](../quickstart.md#compose-driven-test-stack) lists variations and teardown steps.
### Production Environment
The production environment is set up for serving live traffic and includes:
@@ -102,6 +118,22 @@ The production environment is set up for serving live traffic and includes:
- Load balancer (Traefik) for distributing incoming requests
- Monitoring and logging tools for tracking application performance
#### Production docker compose topology
- `docker-compose.prod.yml` defines the runtime topology for operator-managed deployments.
- `api` service runs the FastAPI image with resource limits (`API_LIMIT_CPUS`, `API_LIMIT_MEMORY`) and a `/health` probe consumed by Traefik and the Compose health check.
- `traefik` service (enabled via the `reverse-proxy` profile) terminates TLS using the ACME resolver configured by `TRAEFIK_ACME_EMAIL` and routes `CALMINER_DOMAIN` traffic to the API.
- `postgres` service (enabled via the `local-db` profile) exists for edge deployments without managed PostgreSQL and persists data in the `pg_data_prod` volume while mounting `./backups` for operator snapshots.
- All services join the configurable `CALMINER_NETWORK` (defaults to `calminer_backend`) to keep traffic isolated from host networks.
Deployment workflow:
1. Copy `config/setup_production.env.example` to `config/setup_production.env` and populate domain, registry image tag, database credentials, and resource budgets.
2. Launch the stack with `docker compose --env-file config/setup_production.env -f docker-compose.prod.yml --profile reverse-proxy up -d` (append `--profile local-db` when hosting Postgres locally).
3. Run database migrations and seeding using `docker compose --env-file config/setup_production.env -f docker-compose.prod.yml run --rm api python scripts/setup_database.py --run-migrations --seed-data`.
4. Monitor container health via `docker compose -f docker-compose.prod.yml ps` or Traefik dashboards; the API health endpoint returns `{ "status": "ok" }` when ready.
5. Shut down with `docker compose -f docker-compose.prod.yml down` (volumes persist unless `-v` is supplied).
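The readiness check in step 4 can also be scripted outside Compose. A minimal sketch, assuming the API exposes `/health` returning `{ "status": "ok" }` as described above (the URL and retry budget are illustrative):

```python
import json
import time
import urllib.error
import urllib.request


def probe_health(url: str, attempts: int = 10, delay_s: float = 3.0) -> bool:
    """Poll the health endpoint until it reports {"status": "ok"} or retries run out."""
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if json.load(resp).get("status") == "ok":
                    return True
        except (OSError, ValueError):
            pass  # endpoint not up yet (or bad JSON); fall through to the retry delay
        time.sleep(delay_s)
    return False
```

This mirrors the curl-based retry loop used by the deploy workflow, just in a form that can be reused from other tooling.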
## Containerized Deployment Flow
The Docker-based deployment path aligns with the solution strategy documented in [Solution Strategy](04_solution_strategy.md) and the CI practices captured in [Testing & CI](07_deployment/07_01_testing_ci.md).

docs/quickstart.md

@@ -4,6 +4,13 @@ This document contains the expanded development, usage, testing, and migration g
## Development
### Prerequisites
- Python 3.10+
- Node.js 20+ (for Playwright-driven E2E tests)
- Docker (optional, required for containerized workflows)
- Git
To get started locally:
```powershell
@@ -47,6 +54,99 @@ docker run --rm -p 8000:8000 ^
If you maintain a Postgres or Redis dependency locally, consider authoring a `docker compose` stack that pairs them with the app container. The Docker image expects the database to be reachable and migrations executed before serving traffic.
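One way to satisfy the reachability expectation is to gate startup on the database port. A minimal sketch (host, port, and timing values are illustrative, not taken from the repository):

```python
import socket
import time


def wait_for_port(host: str, port: int, timeout_s: float = 60.0) -> bool:
    """Poll a TCP port until it accepts connections or the deadline passes."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True  # something is listening; proceed with startup
        except OSError:
            time.sleep(1)  # not reachable yet; back off briefly and retry
    return False


# Example: block until Postgres answers before running migrations or serving traffic.
# wait_for_port("localhost", 5432)
```

Note this only confirms the port is open, not that migrations have run; pair it with `scripts/setup_database.py` for the latter.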
### Compose-driven development stack
The repository ships with `docker-compose.dev.yml`, wiring the API and database into a single development stack. It defaults to the Debian-based `postgres:16` image so UTF-8 locales are available without additional tooling and mounts persistent data in the `pg_data_dev` volume.
Typical workflow (run from the repository root):
```powershell
# Build images and ensure dependencies are cached
docker compose -f docker-compose.dev.yml build
# Start FastAPI and Postgres in the background
docker compose -f docker-compose.dev.yml up -d
# Tail logs for both services
docker compose -f docker-compose.dev.yml logs -f
# Stop services but keep the database volume for reuse
docker compose -f docker-compose.dev.yml down
# Remove the persistent Postgres volume when you need a clean slate
docker volume rm calminer_pg_data_dev # optional; confirm exact name with `docker volume ls`
```
Environment variables used by the containers live directly in the compose file (`DATABASE_HOST=db`, `DATABASE_NAME=calminer_dev`, etc.), so no extra `.env` file is required. Override them per invocation when needed, e.g. `docker compose -f docker-compose.dev.yml run --rm -e VAR=value api` (the `-e` flag applies to `run`/`exec`, not `up`).
For a deeper walkthrough (including volume naming conventions, port mappings, and how the stack fits into the broader architecture), cross-check `docs/architecture/15_development_setup.md`. That chapter mirrors the compose defaults captured here so both documents stay in sync.
### Compose-driven test stack
Use `docker-compose.test.yml` to spin up a Postgres 16 container and execute the Python test suite in a disposable worker container:
```powershell
# Build images used by the test workflow
docker compose -f docker-compose.test.yml build
# Run the default target (unit tests)
docker compose -f docker-compose.test.yml run --rm tests
# Run a specific target (e.g., full suite)
docker compose -f docker-compose.test.yml run --rm -e PYTEST_TARGET=tests tests
# Tear everything down and drop the test database volume
docker compose -f docker-compose.test.yml down -v
```
The `tests` service prepares the database via `scripts/setup_database.py` before invoking pytest, ensuring migrations and seed data mirror CI behaviour. Named volumes (`pip_cache_test`, `pg_data_test`) cache dependencies and data between runs; remove them with `down -v` whenever you want a pristine environment. An `api` service is available on `http://localhost:8001` for spot-checking API responses against the same test database.
### Compose-driven production stack
Use `docker-compose.prod.yml` for operator-managed deployments. The file defines:
- `api`: FastAPI container with configurable CPU/memory limits and a `/health` probe.
- `traefik`: Optional (enable with the `reverse-proxy` profile) to terminate TLS and route traffic based on `CALMINER_DOMAIN`.
- `postgres`: Optional (enable with the `local-db` profile) when a managed database is unavailable; persists data in `pg_data_prod` and mounts `./backups`.
Commands (run from the repository root):
```powershell
# Prepare environment variables once per environment
copy config\setup_production.env.example config\setup_production.env
# Start API behind Traefik
docker compose `
  --env-file config/setup_production.env `
  -f docker-compose.prod.yml `
  --profile reverse-proxy `
  up -d
# Add the local Postgres profile when running without managed DB
docker compose `
  --env-file config/setup_production.env `
  -f docker-compose.prod.yml `
  --profile reverse-proxy --profile local-db `
  up -d
# Apply migrations/seed data
docker compose `
  --env-file config/setup_production.env `
  -f docker-compose.prod.yml `
  run --rm api `
  python scripts/setup_database.py --run-migrations --seed-data
# Check health (FastAPI exposes /health)
docker compose -f docker-compose.prod.yml ps
# Stop services (volumes persist unless -v is supplied)
docker compose -f docker-compose.prod.yml down
```
Key environment variables (documented in `config/setup_production.env.example`): container image tag, domain/ACME email, published ports, network name, and resource limits (`API_LIMIT_CPUS`, `API_LIMIT_MEMORY`, etc.).
For deployment topology diagrams and operational sequencing, see [docs/architecture/07_deployment_view.md](architecture/07_deployment_view.md#production-docker-compose-topology).
## Usage Overview
- **API base URL**: `http://localhost:8000/api`
@@ -98,7 +198,7 @@ python scripts/setup_database.py --run-migrations --seed-data
The dry-run invocation reports which steps would execute without making changes. The live run applies the baseline (if not already recorded in `schema_migrations`) and seeds the reference data relied upon by the UI and API.
> When `--seed-data` is supplied without `--run-migrations`, the bootstrap script automatically applies any pending SQL migrations first so the `application_setting` table (and future settings-backed features) are present before seeding.
>
> The application still accepts `DATABASE_URL` as a fallback if the granular variables are not set.
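To illustrate that fallback order, here is a hedged sketch of how a resolver might prefer `DATABASE_URL` over the granular settings; the helper name is hypothetical and not part of the application's code, though the variable names match those documented in `config/setup_production.env.example`:

```python
from urllib.parse import quote_plus


def resolve_database_url(env: dict[str, str]) -> str:
    """Prefer a consolidated DATABASE_URL; otherwise assemble one from granular settings."""
    if env.get("DATABASE_URL"):
        return env["DATABASE_URL"]
    user = quote_plus(env["DATABASE_USER"])
    password = quote_plus(env["DATABASE_PASSWORD"])  # escape special characters in credentials
    driver = env.get("DATABASE_DRIVER", "postgresql+psycopg2")
    host = env["DATABASE_HOST"]
    port = env.get("DATABASE_PORT", "5432")
    name = env["DATABASE_NAME"]
    return f"{driver}://{user}:{password}@{host}:{port}/{name}"
```

With the example production values, this yields a SQLAlchemy-style URL of the same shape as the commented `DATABASE_URL` line in the env template.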
## Database bootstrap workflow

main.py

@@ -32,6 +32,11 @@ async def json_validation(
return await validate_json(request, call_next)
@app.get("/health", summary="Container health probe")
async def health() -> dict[str, str]:
return {"status": "ok"}
app.mount("/static", StaticFiles(directory="static"), name="static")
# Include API routers