| title | description | status |
|---|---|---|
| 07 — Deployment View | Describe deployment topology, infrastructure components, and environments (dev/stage/prod). | draft |
# 07 — Deployment View

## Deployment Topology
The CalMiner application is deployed using a multi-tier architecture consisting of the following layers:
- Client Layer: This layer consists of web browsers that interact with the application through a user interface rendered by Jinja2 templates and enhanced with JavaScript (Chart.js for dashboards).
- Web Application Layer: This layer hosts the FastAPI application, which handles API requests, business logic, and serves HTML templates. It communicates with the database layer for data persistence.
- Database Layer: This layer consists of a PostgreSQL database that stores all application data, including scenarios, parameters, costs, consumption, production outputs, equipment, maintenance logs, and simulation results.
```mermaid
graph TD
    A[Client Layer] --> B[Web Application Layer]
    B --> C[Database Layer]
```
## Infrastructure Components
The infrastructure components for the application include:
- Reverse Proxy (optional): An Nginx or Apache server can be placed in front of the application to route incoming traffic.
- Containerization: Docker images are generated via the repository `Dockerfile`, using a multi-stage build to keep the final runtime minimal.
- CI/CD Pipeline: Automated pipelines (Gitea Actions) run tests, build/push Docker images, and trigger deployments.
- Gitea Actions Workflows: Located under `.gitea/workflows/`, these workflows handle testing, building, pushing, and deploying the application.
- Gitea Action Runners: Self-hosted runners execute the CI/CD workflows.
- Testing and Continuous Integration: Automated tests ensure code quality before deployment, also documented in Testing & CI.
- Docker Infrastructure: Docker is used to containerize the application for consistent deployment across environments.
- Portainer: Management UI used in the production environment for administering Docker containers.
- Web Server: Hosts the FastAPI application and serves API endpoints.
- Database Server: PostgreSQL database for persisting application data.
- Static File Server: Serves static assets such as CSS, JavaScript, and image files.
- Cloud Infrastructure (optional): The application can be deployed on cloud platforms.
```mermaid
graph TD
    G[Git Repository] --> C[CI/CD Pipeline]
    C --> GAW[Gitea Action Workflows]
    GAW --> GAR[Gitea Action Runners]
    GAR --> T[Testing]
    GAR --> CI[Continuous Integration]
    T --> G
    CI --> G
    W[Web Server] --> DB[Database Server]
    RP[Reverse Proxy] --> W
    I((Internet)) <--> RP
    PO[Containerization] --> W
    C --> PO
    W --> S[Static File Server]
    S --> RP
    PO --> DB
    PO --> S
```
## Environments
The application can be deployed in multiple environments to support development, testing, and production.
```mermaid
graph TD
    R[Repository] --> DEV[Development Environment]
    R --> TEST[Testing Environment]
    R --> PROD[Production Environment]
    DEV --> W_DEV[Web Server - Dev]
    DEV --> DB_DEV[Database Server - Dev]
    TEST --> W_TEST[Web Server - Test]
    TEST --> DB_TEST[Database Server - Test]
    PROD --> W_PROD[Web Server - Prod]
    PROD --> DB_PROD[Database Server - Prod]
```
### Development Environment
The development environment is set up for local development and testing. It includes:
- Local PostgreSQL instance (Docker Compose recommended; compose file available at `docker-compose.postgres.yml`)
- FastAPI server running in debug mode

`docker-compose.dev.yml` encapsulates this topology (sketched below):

- The `api` service mounts the repository for live reloads (`uvicorn --reload`) and depends on the database health check.
- The `db` service uses the Debian-based `postgres:16` image with UTF-8 locale configuration and persists data in `pg_data_dev`.
- A shared `calminer_backend` bridge network keeps traffic contained; ports 8000/5432 are published for local tooling.
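For orientation, a minimal sketch of what such a file could look like; the credential values and health-check details are assumptions inferred from the bullets above, not a copy of the repository file:

```yaml
# docker-compose.dev.yml (illustrative sketch, not the actual repository file)
services:
  api:
    build: .
    command: uvicorn main:app --reload --host 0.0.0.0 --port 8000
    volumes:
      - .:/app                      # mount the repository for live reloads
    ports:
      - "8000:8000"
    depends_on:
      db:
        condition: service_healthy
    networks:
      - calminer_backend
  db:
    image: postgres:16
    environment:
      POSTGRES_DB: calminer         # assumed database name
      POSTGRES_USER: calminer       # assumed local-only credentials
      POSTGRES_PASSWORD: calminer
    ports:
      - "5432:5432"
    volumes:
      - pg_data_dev:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U calminer"]
      interval: 5s
      timeout: 3s
      retries: 10
networks:
  calminer_backend:
    driver: bridge
volumes:
  pg_data_dev:
```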
See `docs/quickstart.md` for command examples and volume maintenance tips.
### Testing Environment
The testing environment is set up for automated testing and quality assurance. It includes:
- Staging PostgreSQL instance
- FastAPI server running in testing mode
- Automated test suite (e.g., pytest) for running unit and integration tests
`docker-compose.test.yml` provisions an ephemeral CI-like stack (a sketch follows the list):

- The `tests` service builds the application image, installs `requirements-test.txt`, runs the database setup script (dry-run + apply), then executes pytest.
- The `api` service is available on port 8001 for manual verification against the test database.
- The `postgres` service seeds a disposable Postgres 16 instance with health checks and named volumes (`pg_data_test`, `pip_cache_test`).
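A self-contained sketch of that stack under the same assumptions; the setup-script flags, cache path, and credentials are illustrative, not taken from the repository:

```yaml
# docker-compose.test.yml (illustrative sketch; flags and paths are assumptions)
services:
  tests:
    build: .
    command: >
      sh -c "pip install -r requirements-test.txt &&
             python scripts/setup_database.py --dry-run &&
             python scripts/setup_database.py &&
             pytest"
    depends_on:
      postgres:
        condition: service_healthy
    volumes:
      - pip_cache_test:/root/.cache/pip   # reuse pip downloads across runs
  api:
    build: .
    command: uvicorn main:app --host 0.0.0.0 --port 8000
    ports:
      - "8001:8000"                       # manual verification against the test DB
    depends_on:
      postgres:
        condition: service_healthy
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: test             # disposable instance, placeholder secret
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 5s
      retries: 10
    volumes:
      - pg_data_test:/var/lib/postgresql/data
volumes:
  pip_cache_test:
  pg_data_test:
```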
Typical commands mirror the CI workflow (`docker compose -f docker-compose.test.yml run --rm tests`); the quickstart lists variations and teardown steps.
### Production Environment
The production environment is set up for serving live traffic and includes:
- Production PostgreSQL instance
- FastAPI server running in production mode
- Load balancer (Traefik) for distributing incoming requests
- Monitoring and logging tools for tracking application performance
#### Production Docker Compose topology

- `docker-compose.prod.yml` defines the runtime topology for operator-managed deployments (sketched below).
- The `api` service runs the FastAPI image with resource limits (`API_LIMIT_CPUS`, `API_LIMIT_MEMORY`) and a `/health` probe consumed by Traefik and the Compose health check.
- The `traefik` service (enabled via the `reverse-proxy` profile) terminates TLS using the ACME resolver configured by `TRAEFIK_ACME_EMAIL` and routes `CALMINER_DOMAIN` traffic to the API.
- The `postgres` service (enabled via the `local-db` profile) exists for edge deployments without managed PostgreSQL and persists data in the `pg_data_prod` volume while mounting `./backups` for operator snapshots.
- All services join the configurable `CALMINER_NETWORK` (defaults to `calminer_backend`) to keep traffic isolated from host networks.
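A trimmed sketch of how these pieces could fit together; the image references, Traefik flags, and label names are assumptions rather than the repository's actual file:

```yaml
# docker-compose.prod.yml (illustrative sketch; image names, labels, and variables are assumptions)
services:
  api:
    image: ${CALMINER_IMAGE:-calminer:latest}   # registry tag from setup_production.env
    environment:
      DATABASE_HOST: ${DATABASE_HOST}
      DATABASE_PASSWORD: ${DATABASE_PASSWORD}
    deploy:
      resources:
        limits:
          cpus: "${API_LIMIT_CPUS:-1.0}"
          memory: ${API_LIMIT_MEMORY:-512M}
    healthcheck:
      test: ["CMD-SHELL", "curl -fsS http://localhost:8000/health || exit 1"]
      interval: 30s
    labels:
      - traefik.enable=true
      - traefik.http.routers.api.rule=Host(`${CALMINER_DOMAIN}`)
      - traefik.http.services.api.loadbalancer.server.port=8000
    networks: [backend]
  traefik:
    image: traefik:v3.1
    profiles: [reverse-proxy]
    command:
      - --providers.docker=true
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.email=${TRAEFIK_ACME_EMAIL}
      - --certificatesresolvers.le.acme.tlschallenge=true
    ports:
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro   # Docker provider reads the socket
    networks: [backend]
  postgres:
    image: postgres:16
    profiles: [local-db]
    volumes:
      - pg_data_prod:/var/lib/postgresql/data
      - ./backups:/backups               # operator snapshots
    networks: [backend]
networks:
  backend:
    name: ${CALMINER_NETWORK:-calminer_backend}
volumes:
  pg_data_prod:
```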
Deployment workflow:
- Copy `config/setup_production.env.example` to `config/setup_production.env` and populate domain, registry image tag, database credentials, and resource budgets.
- Launch the stack with `docker compose --env-file config/setup_production.env -f docker-compose.prod.yml --profile reverse-proxy up -d` (append `--profile local-db` when hosting Postgres locally).
- Run database migrations and seeding using `docker compose --env-file config/setup_production.env -f docker-compose.prod.yml run --rm api python scripts/setup_database.py --run-migrations --seed-data`.
- Monitor container health via `docker compose -f docker-compose.prod.yml ps` or Traefik dashboards; the API health endpoint returns `{ "status": "ok" }` when ready.
- Shut down with `docker compose -f docker-compose.prod.yml down` (volumes persist unless `-v` is supplied).
## Containerized Deployment Flow
The Docker-based deployment path aligns with the solution strategy documented in Solution Strategy and the CI practices captured in Testing & CI.
### Image Build

- The multi-stage `Dockerfile` installs dependencies in a builder layer (including system compilers and Python packages) and copies only the required runtime artifacts to the final image.
- Build arguments are minimal; database configuration is supplied at runtime via granular variables (`DATABASE_DRIVER`, `DATABASE_HOST`, `DATABASE_PORT`, `DATABASE_USER`, `DATABASE_PASSWORD`, `DATABASE_NAME`, optional `DATABASE_SCHEMA`). Secrets and configuration should be passed via environment variables or an orchestrator; an illustrative stanza follows this list.
- The resulting image exposes port `8000` and starts `uvicorn main:app` (see the main `README.md`).
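As an illustration of that runtime-configuration rule, a compose-style stanza might inject the variables as follows; all values are placeholders, and the driver string in particular is an assumption:

```yaml
# Illustrative runtime configuration for the containerized app (placeholder values only)
services:
  api:
    image: calminer:latest            # built from the multi-stage Dockerfile
    ports:
      - "8000:8000"                   # image exposes 8000 and starts uvicorn main:app
    environment:
      DATABASE_DRIVER: postgresql     # assumed driver string; check the app's config
      DATABASE_HOST: db.internal.example
      DATABASE_PORT: "5432"
      DATABASE_USER: calminer
      DATABASE_PASSWORD: ${DATABASE_PASSWORD}   # inject from a secret store, never bake into the image
      DATABASE_NAME: calminer
      DATABASE_SCHEMA: public         # optional
```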
### Runtime Environment
- For single-node deployments, run the container alongside PostgreSQL/Redis using Docker Compose or an equivalent orchestrator.
- A reverse proxy (Traefik) terminates TLS and forwards traffic to the container on port `8000`.
- Migrations must be applied prior to rolling out a new image; automation can hook into the deploy step to run `scripts/run_migrations.py`.
### CI/CD Integration

- Gitea Actions workflows reside under `.gitea/workflows/` (a shape sketch follows this list):
  - `test.yml` executes the pytest suite using cached pip dependencies.
  - `build-and-push.yml` logs into the container registry, rebuilds the Docker image using GitHub Actions cache-backed layers, and pushes `latest` (and additional tags as required).
  - `deploy.yml` connects to the target host via SSH, pulls the pushed tag, stops any existing container, and launches the new version.
- Required secrets: `REGISTRY_URL`, `REGISTRY_USERNAME`, `REGISTRY_PASSWORD`, `SSH_HOST`, `SSH_USERNAME`, `SSH_PRIVATE_KEY`.
- Extend these workflows when introducing staging/blue-green deployments; keep cross-links with Testing & CI up to date.
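For reference, a workflow of this shape might look roughly as follows; the step choices, Python version, and action versions are assumptions (Gitea Actions is largely compatible with GitHub Actions syntax):

```yaml
# .gitea/workflows/test.yml (illustrative sketch of the shape, not the repository file)
name: test
on: [push, pull_request]
jobs:
  pytest:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"     # assumed interpreter version
          cache: pip                 # cached pip dependencies, as described above
      - run: pip install -r requirements.txt -r requirements-test.txt
      - run: pytest
```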
## Integrations and Future Work (deployment-related)

- Persistence of results: `/api/simulations/run` currently returns in-memory results; the next iteration should persist to `simulation_result` and reference scenarios.
- Deployment: implement infrastructure-as-code (e.g., Terraform/Ansible) to provision the hosting environment and maintain parity across dev/stage/prod.