feat: Add production and development Docker Compose configurations, health check endpoint, and update documentation

Date: 2025-10-27 20:57:36 +01:00
Parent: a6a5f630cc
Commit: dcb08ab1b8
7 changed files with 318 additions and 4 deletions


@@ -4,6 +4,13 @@ This document contains the expanded development, usage, testing, and migration g
## Development
### Prerequisites
- Python 3.10+
- Node.js 20+ (for Playwright-driven E2E tests)
- Docker (optional, required for containerized workflows)
- Git
To get started locally:
```powershell
@@ -47,6 +54,99 @@ docker run --rm -p 8000:8000 ^
If you maintain a Postgres or Redis dependency locally, consider authoring a `docker compose` stack that pairs them with the app container. The Docker image expects the database to be reachable and migrations executed before serving traffic.
### Compose-driven development stack
The repository ships with `docker-compose.dev.yml`, wiring the API and database into a single development stack. It defaults to the Debian-based `postgres:16` image so UTF-8 locales are available without additional tooling and mounts persistent data in the `pg_data_dev` volume.
Typical workflow (run from the repository root):
```powershell
# Build images and ensure dependencies are cached
docker compose -f docker-compose.dev.yml build
# Start FastAPI and Postgres in the background
docker compose -f docker-compose.dev.yml up -d
# Tail logs for both services
docker compose -f docker-compose.dev.yml logs -f
# Stop services but keep the database volume for reuse
docker compose -f docker-compose.dev.yml down
# Remove the persistent Postgres volume when you need a clean slate
docker volume rm calminer_pg_data_dev # optional; confirm exact name with `docker volume ls`
```
Environment variables used by the containers live directly in the compose file (`DATABASE_HOST=db`, `DATABASE_NAME=calminer_dev`, etc.), so no extra `.env` file is required. Adjust them by editing the compose file, or override a variable for a one-off command with `docker compose -f docker-compose.dev.yml run -e VAR=value <service>` (the `-e` flag is accepted by `run`, not by `up`).
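Overrides can also be captured declaratively instead of being passed per invocation. A minimal sketch of a local `docker-compose.override.yml` — the service name `api` and the override value here are assumptions; confirm the service names in `docker-compose.dev.yml`:

```yaml
services:
  api:
    environment:
      DATABASE_NAME: calminer_dev_alt  # hypothetical override value
```

Note that Compose only merges `docker-compose.override.yml` automatically for the default file names; when you pass `-f docker-compose.dev.yml`, list both files explicitly (`-f docker-compose.dev.yml -f docker-compose.override.yml`).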
For a deeper walkthrough (including volume naming conventions, port mappings, and how the stack fits into the broader architecture), cross-check `docs/architecture/15_development_setup.md`. That chapter mirrors the compose defaults captured here so both documents stay in sync.
### Compose-driven test stack
Use `docker-compose.test.yml` to spin up a Postgres 16 container and execute the Python test suite in a disposable worker container:
```powershell
# Build images used by the test workflow
docker compose -f docker-compose.test.yml build
# Run the default target (unit tests)
docker compose -f docker-compose.test.yml run --rm tests
# Run a specific target (e.g., full suite)
docker compose -f docker-compose.test.yml run --rm -e PYTEST_TARGET=tests tests
# Tear everything down and drop the test database volume
docker compose -f docker-compose.test.yml down -v
```
The `tests` service prepares the database via `scripts/setup_database.py` before invoking pytest, ensuring migrations and seed data mirror CI behaviour. Named volumes (`pip_cache_test`, `pg_data_test`) cache dependencies and data between runs; remove them with `down -v` whenever you want a pristine environment. An `api` service is available on `http://localhost:8001` for spot-checking API responses against the same test database.
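The `PYTEST_TARGET` switch can be pictured with a small sketch. This is hypothetical: the default target path and the exact command the `tests` service assembles are assumptions, not taken from the compose file:

```python
import os


def pytest_command(default_target: str = "tests/unit") -> list[str]:
    """Build the pytest invocation, honouring a PYTEST_TARGET override."""
    target = os.environ.get("PYTEST_TARGET", default_target)
    return ["pytest", target]
```

Under this sketch, `-e PYTEST_TARGET=tests` widens the run from the default subset to the whole suite.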
### Compose-driven production stack
Use `docker-compose.prod.yml` for operator-managed deployments. The file defines:
- `api`: FastAPI container with configurable CPU/memory limits and a `/health` probe.
- `traefik`: Optional reverse proxy (enable with the `reverse-proxy` profile) that terminates TLS and routes traffic based on `CALMINER_DOMAIN`.
- `postgres`: Optional local database (enable with the `local-db` profile) for when a managed database is unavailable; persists data in `pg_data_prod` and mounts `./backups`.
Commands (run from the repository root):
```powershell
# Prepare environment variables once per environment
copy config\setup_production.env.example config\setup_production.env
# Start API behind Traefik
docker compose ^
--env-file config/setup_production.env ^
-f docker-compose.prod.yml ^
--profile reverse-proxy ^
up -d
# Add the local Postgres profile when running without managed DB
docker compose ^
--env-file config/setup_production.env ^
-f docker-compose.prod.yml ^
--profile reverse-proxy --profile local-db ^
up -d
# Apply migrations/seed data
docker compose ^
--env-file config/setup_production.env ^
-f docker-compose.prod.yml ^
run --rm api ^
python scripts/setup_database.py --run-migrations --seed-data
# Check container status (the health column reflects the /health probe)
docker compose -f docker-compose.prod.yml ps
# Stop services (volumes persist unless -v is supplied)
docker compose -f docker-compose.prod.yml down
```
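The `/health` endpoint can also be exercised directly after a deploy. A small polling helper, sketched in Python under the assumption that the endpoint simply answers HTTP 200 when the API is up (the response body is not specified here):

```python
import time
import urllib.error
import urllib.request


def wait_for_healthy(url: str, timeout: float = 30.0, interval: float = 1.0) -> bool:
    """Poll `url` until it returns HTTP 200 or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=interval) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # service not reachable yet; retry after a short pause
        time.sleep(interval)
    return False
```

Against a running stack this would be called as e.g. `wait_for_healthy("http://localhost:8000/health")` (adjust host and port to your published mapping).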
Key environment variables (documented in `config/setup_production.env.example`): container image tag, domain/ACME email, published ports, network name, and resource limits (`API_LIMIT_CPUS`, `API_LIMIT_MEMORY`, etc.).
For deployment topology diagrams and operational sequencing, see [docs/architecture/07_deployment_view.md](architecture/07_deployment_view.md#production-docker-compose-topology).
## Usage Overview
- **API base URL**: `http://localhost:8000/api`
@@ -98,7 +198,7 @@ python scripts/setup_database.py --run-migrations --seed-data
The dry-run invocation reports which steps would execute without making changes. The live run applies the baseline (if not already recorded in `schema_migrations`) and seeds the reference data relied upon by the UI and API.
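The dry-run/live split can be pictured with a short sketch. This is a hypothetical structure, not the actual contents of `scripts/setup_database.py`:

```python
def bootstrap(dry_run: bool = True) -> list[str]:
    """Report the bootstrap steps; execute them only when dry_run is False."""
    steps = ["apply baseline migration", "record in schema_migrations", "seed reference data"]
    for step in steps:
        if dry_run:
            print(f"[dry-run] would: {step}")
        else:
            print(f"running: {step}")
            # ... the real script would perform the step here ...
    return steps
```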
> When `--seed-data` is supplied without `--run-migrations`, the bootstrap script automatically applies any pending SQL migrations first so the `application_setting` table (and future settings-backed features) are present before seeding.
>
> The application still accepts `DATABASE_URL` as a fallback if the granular variables are not set.
## Database bootstrap workflow