feat: Add production and development Docker Compose configurations, health check endpoint, and update documentation

docs/architecture/07_deployment_view.md
@@ -1,6 +1,6 @@
---
-title: "07 — Deployment View"
-description: "Describe deployment topology, infrastructure components, and environments (dev/stage/prod)."
+title: '07 — Deployment View'
+description: 'Describe deployment topology, infrastructure components, and environments (dev/stage/prod).'
status: draft
---
@@ -85,6 +85,14 @@ The development environment is set up for local development and testing. It incl
- Local PostgreSQL instance (docker compose recommended, compose file available at `docker-compose.postgres.yml`)
- FastAPI server running in debug mode

`docker-compose.dev.yml` encapsulates this topology (a trimmed sketch follows the list):

- `api` service mounts the repository for live reloads (`uvicorn --reload`) and depends on the database health check.
- `db` service uses the Debian-based `postgres:16` image with UTF-8 locale configuration and persists data in `pg_data_dev`.
- A shared `calminer_backend` bridge network keeps traffic contained; ports 8000/5432 are published for local tooling.
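
The sketch below illustrates that layout; it is a trimmed approximation (build context, uvicorn module path, and credentials are placeholder assumptions), not a copy of the repository's `docker-compose.dev.yml`.

```yaml
# Illustrative sketch only; values are assumptions, not the repository's actual docker-compose.dev.yml.
services:
  api:
    build: .
    command: uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload  # module path "app.main:app" is assumed
    volumes:
      - .:/app                        # mount the repository for live reloads
    ports:
      - "8000:8000"
    environment:
      DATABASE_HOST: db
      DATABASE_NAME: calminer_dev
    depends_on:
      db:
        condition: service_healthy    # wait for the database health check
    networks:
      - calminer_backend

  db:
    image: postgres:16                # Debian-based image, UTF-8 locales available
    environment:
      POSTGRES_DB: calminer_dev
      POSTGRES_USER: calminer         # assumed credentials
      POSTGRES_PASSWORD: calminer
    ports:
      - "5432:5432"
    volumes:
      - pg_data_dev:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U calminer -d calminer_dev"]
      interval: 5s
      timeout: 5s
      retries: 10
    networks:
      - calminer_backend

networks:
  calminer_backend:
    driver: bridge

volumes:
  pg_data_dev:
```
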
See [docs/quickstart.md](../quickstart.md#compose-driven-development-stack) for command examples and volume maintenance tips.

### Testing Environment

The testing environment is set up for automated testing and quality assurance. It includes:
@@ -93,6 +101,14 @@ The testing environment is set up for automated testing and quality assurance. I
- FastAPI server running in testing mode
- Automated test suite (e.g., pytest) for running unit and integration tests

`docker-compose.test.yml` provisions an ephemeral CI-like stack (see the sketch after this list):

- `tests` service builds the application image, installs `requirements-test.txt`, runs the database setup script (dry-run + apply), then executes pytest.
- `api` service is available on port 8001 for manual verification against the test database.
- `postgres` service seeds a disposable Postgres 16 instance with health checks and named volumes (`pg_data_test`, `pip_cache_test`).
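
A compact sketch of how such a stack can be wired follows; the command chain, credentials, and default pytest target are assumptions, the dry-run pass is collapsed into a comment, and the helper `api` service is omitted for brevity.

```yaml
# Illustrative sketch only; not the repository's actual docker-compose.test.yml.
services:
  tests:
    build: .
    # The real service also performs a dry-run of the setup script first; the chain below is simplified.
    command: >
      sh -c "pip install -r requirements-test.txt &&
      python scripts/setup_database.py --run-migrations --seed-data &&
      pytest $${PYTEST_TARGET:-tests/unit}"
    environment:
      DATABASE_HOST: postgres
      DATABASE_NAME: calminer_test        # assumed name
    volumes:
      - pip_cache_test:/root/.cache/pip   # cache installed dependencies between runs
    depends_on:
      postgres:
        condition: service_healthy

  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: calminer_test
      POSTGRES_USER: calminer             # assumed credentials
      POSTGRES_PASSWORD: calminer
    volumes:
      - pg_data_test:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U calminer -d calminer_test"]
      interval: 5s
      retries: 10

volumes:
  pg_data_test:
  pip_cache_test:
```
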
Typical commands mirror the CI workflow (`docker compose -f docker-compose.test.yml run --rm tests`); the [quickstart](../quickstart.md#compose-driven-test-stack) lists variations and teardown steps.

### Production Environment

The production environment is set up for serving live traffic and includes:
@@ -102,6 +118,22 @@ The production environment is set up for serving live traffic and includes:
- Load balancer (Traefik) for distributing incoming requests
- Monitoring and logging tools for tracking application performance

#### Production Docker Compose topology

- `docker-compose.prod.yml` defines the runtime topology for operator-managed deployments (a trimmed sketch follows the list).
- `api` service runs the FastAPI image with resource limits (`API_LIMIT_CPUS`, `API_LIMIT_MEMORY`) and a `/health` probe consumed by Traefik and the Compose health check.
- `traefik` service (enabled via the `reverse-proxy` profile) terminates TLS using the ACME resolver configured by `TRAEFIK_ACME_EMAIL` and routes `CALMINER_DOMAIN` traffic to the API.
- `postgres` service (enabled via the `local-db` profile) exists for edge deployments without managed PostgreSQL and persists data in the `pg_data_prod` volume while mounting `./backups` for operator snapshots.
- All services join the configurable `CALMINER_NETWORK` (defaults to `calminer_backend`) to keep traffic isolated from host networks.
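
The sketch below illustrates those pieces (profiles, resource limits, health check, Traefik routing); the label set, ACME resolver name, ports, and default values are assumptions rather than the repository's actual `docker-compose.prod.yml`.

```yaml
# Illustrative sketch only; labels, ports, and defaults are assumptions.
services:
  api:
    image: ${CALMINER_IMAGE:-ghcr.io/example/calminer:latest}   # image variable name and registry assumed
    # Database settings come from config/setup_production.env via `docker compose --env-file ...`.
    deploy:
      resources:
        limits:
          cpus: "${API_LIMIT_CPUS:-1.0}"
          memory: ${API_LIMIT_MEMORY:-512M}
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]  # assumes curl in the image; endpoint answers {"status": "ok"}
      interval: 30s
      timeout: 5s
      retries: 3
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.api.rule=Host(`${CALMINER_DOMAIN}`)"
      - "traefik.http.routers.api.entrypoints=websecure"
      - "traefik.http.routers.api.tls.certresolver=letsencrypt"   # resolver name assumed
      - "traefik.http.services.api.loadbalancer.server.port=8000"
    networks:
      - backend

  traefik:
    image: traefik:v3.0
    profiles: ["reverse-proxy"]
    command:
      - "--providers.docker=true"
      - "--entrypoints.websecure.address=:443"
      - "--certificatesresolvers.letsencrypt.acme.email=${TRAEFIK_ACME_EMAIL}"
      - "--certificatesresolvers.letsencrypt.acme.tlschallenge=true"
      - "--certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json"
    ports:
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - traefik_letsencrypt:/letsencrypt
    networks:
      - backend

  postgres:
    image: postgres:16
    profiles: ["local-db"]
    # POSTGRES_* credentials omitted; the real file wires them from the env file.
    volumes:
      - pg_data_prod:/var/lib/postgresql/data
      - ./backups:/backups                                        # operator snapshots
    networks:
      - backend

networks:
  backend:
    name: ${CALMINER_NETWORK:-calminer_backend}

volumes:
  pg_data_prod:
  traefik_letsencrypt:
```
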
Deployment workflow:

1. Copy `config/setup_production.env.example` to `config/setup_production.env` and populate domain, registry image tag, database credentials, and resource budgets.
2. Launch the stack with `docker compose --env-file config/setup_production.env -f docker-compose.prod.yml --profile reverse-proxy up -d` (append `--profile local-db` when hosting Postgres locally).
3. Run database migrations and seeding using `docker compose --env-file config/setup_production.env -f docker-compose.prod.yml run --rm api python scripts/setup_database.py --run-migrations --seed-data`.
4. Monitor container health via `docker compose -f docker-compose.prod.yml ps` or Traefik dashboards; the API health endpoint returns `{ "status": "ok" }` when ready.
5. Shut down with `docker compose -f docker-compose.prod.yml down` (volumes persist unless `-v` is supplied).

## Containerized Deployment Flow

The Docker-based deployment path aligns with the solution strategy documented in [Solution Strategy](04_solution_strategy.md) and the CI practices captured in [Testing & CI](07_deployment/07_01_testing_ci.md).

docs/quickstart.md

@@ -4,6 +4,13 @@ This document contains the expanded development, usage, testing, and migration g
## Development

### Prerequisites

- Python 3.10+
- Node.js 20+ (for Playwright-driven E2E tests)
- Docker (optional, required for containerized workflows)
- Git

To get started locally:

```powershell

@@ -47,6 +54,99 @@ docker run --rm -p 8000:8000 ^

If you maintain a Postgres or Redis dependency locally, consider authoring a `docker compose` stack that pairs them with the app container. The Docker image expects the database to be reachable and migrations executed before serving traffic.
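
A minimal hand-rolled stack along those lines might look like the following; the image tag, ports, and environment variable names are assumptions you would adapt to your setup.

```yaml
# Minimal hand-rolled sketch; image name, ports, and credentials are assumptions.
services:
  app:
    image: calminer:latest            # assumed local image tag
    ports:
      - "8000:8000"
    environment:
      DATABASE_HOST: postgres
      REDIS_HOST: redis               # only if your setup actually uses Redis
    depends_on:
      - postgres
      - redis

  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me

  redis:
    image: redis:7
```
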
### Compose-driven development stack

The repository ships with `docker-compose.dev.yml`, wiring the API and database into a single development stack. It defaults to the Debian-based `postgres:16` image so UTF-8 locales are available without additional tooling, and mounts persistent data in the `pg_data_dev` volume.
Typical workflow (run from the repository root):

```powershell
# Build images and ensure dependencies are cached
docker compose -f docker-compose.dev.yml build

# Start FastAPI and Postgres in the background
docker compose -f docker-compose.dev.yml up -d

# Tail logs for both services
docker compose -f docker-compose.dev.yml logs -f

# Stop services but keep the database volume for reuse
docker compose -f docker-compose.dev.yml down

# Remove the persistent Postgres volume when you need a clean slate
docker volume rm calminer_pg_data_dev # optional; confirm exact name with `docker volume ls`
```

Environment variables used by the containers live directly in the compose file (`DATABASE_HOST=db`, `DATABASE_NAME=calminer_dev`, etc.), so no extra `.env` file is required. Adjust or override them via `docker compose ... -e VAR=value` if necessary.
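
If you prefer not to edit the tracked compose file, a conventional way to override individual values is a second compose file layered on with an extra `-f` flag (for example `docker compose -f docker-compose.dev.yml -f docker-compose.override.yml up -d`); a hypothetical override file might look like this.

```yaml
# Hypothetical docker-compose.override.yml; service and variable names mirror the dev stack described above.
services:
  api:
    environment:
      DATABASE_NAME: calminer_dev_alt   # assumed alternative database name
  db:
    environment:
      POSTGRES_DB: calminer_dev_alt     # keep the server and the app in sync
```
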
For a deeper walkthrough (including volume naming conventions, port mappings, and how the stack fits into the broader architecture), cross-check `docs/architecture/15_development_setup.md`. That chapter mirrors the compose defaults captured here so both documents stay in sync.

### Compose-driven test stack

Use `docker-compose.test.yml` to spin up a Postgres 16 container and execute the Python test suite in a disposable worker container:

```powershell
# Build images used by the test workflow
docker compose -f docker-compose.test.yml build

# Run the default target (unit tests)
docker compose -f docker-compose.test.yml run --rm tests

# Run a specific target (e.g., full suite)
docker compose -f docker-compose.test.yml run --rm -e PYTEST_TARGET=tests tests

# Tear everything down and drop the test database volume
docker compose -f docker-compose.test.yml down -v
```

The `tests` service prepares the database via `scripts/setup_database.py` before invoking pytest, ensuring migrations and seed data mirror CI behaviour. Named volumes (`pip_cache_test`, `pg_data_test`) cache dependencies and data between runs; remove them with `down -v` whenever you want a pristine environment. An `api` service is available on `http://localhost:8001` for spot-checking API responses against the same test database.
### Compose-driven production stack

Use `docker-compose.prod.yml` for operator-managed deployments. The file defines:

- `api`: FastAPI container with configurable CPU/memory limits and a `/health` probe.
- `traefik`: Optional (enable with the `reverse-proxy` profile) to terminate TLS and route traffic based on `CALMINER_DOMAIN`.
- `postgres`: Optional (enable with the `local-db` profile) when a managed database is unavailable; persists data in `pg_data_prod` and mounts `./backups`.

Commands (run from the repository root):

```powershell
# Prepare environment variables once per environment
copy config\setup_production.env.example config\setup_production.env

# Start API behind Traefik
docker compose `
  --env-file config/setup_production.env `
  -f docker-compose.prod.yml `
  --profile reverse-proxy `
  up -d

# Add the local Postgres profile when running without managed DB
docker compose `
  --env-file config/setup_production.env `
  -f docker-compose.prod.yml `
  --profile reverse-proxy --profile local-db `
  up -d

# Apply migrations/seed data
docker compose `
  --env-file config/setup_production.env `
  -f docker-compose.prod.yml `
  run --rm api `
  python scripts/setup_database.py --run-migrations --seed-data

# Check health (FastAPI exposes /health)
docker compose -f docker-compose.prod.yml ps

# Stop services (volumes persist unless -v is supplied)
docker compose -f docker-compose.prod.yml down
```
Key environment variables (documented in `config/setup_production.env.example`): container image tag, domain/ACME email, published ports, network name, and resource limits (`API_LIMIT_CPUS`, `API_LIMIT_MEMORY`, etc.).

For deployment topology diagrams and operational sequencing, see [docs/architecture/07_deployment_view.md](architecture/07_deployment_view.md#production-docker-compose-topology).

## Usage Overview

- **API base URL**: `http://localhost:8000/api`

@@ -98,7 +198,7 @@ python scripts/setup_database.py --run-migrations --seed-data

The dry-run invocation reports which steps would execute without making changes. The live run applies the baseline (if not already recorded in `schema_migrations`) and seeds the reference data relied upon by the UI and API.

> ℹ️ When `--seed-data` is supplied without `--run-migrations`, the bootstrap script automatically applies any pending SQL migrations first so the `application_setting` table (and future settings-backed features) are present before seeding.

>
> ℹ️ The application still accepts `DATABASE_URL` as a fallback if the granular variables are not set.

## Database bootstrap workflow