3 Commits

300ecebe23 Merge pull request 'fest/ci-improvement' (#3) from fest/ci-improvement into main
All checks were successful: Run Tests / e2e tests (push) in 1m48s; Run Tests / unit tests (push) in 10s
Reviewed-on: #3
2025-10-25 22:03:20 +02:00

70db34d088 feat: Implement composite action for Python environment setup and refactor test workflow to utilize it
All checks were successful: Run Tests / e2e tests (push) in 1m48s; Run Tests / unit tests (push) in 10s
2025-10-25 22:00:28 +02:00

0550928a2f feat: Update CI workflows for Docker image build and deployment, enhance test configurations, and add testing documentation
All checks were successful: Run Tests / e2e tests (push) in 1m49s; Run Tests / unit tests (push) in 11s
2025-10-25 21:28:49 +02:00
10 changed files with 413 additions and 241 deletions

.gitea/actions/setup-python-env/action.yml

@@ -0,0 +1,111 @@
name: Setup Python Environment
description: Configure Python, proxies, dependencies, and optional database setup for CI jobs.
author: CalMiner Team
inputs:
  python-version:
    description: Python version to install.
    required: false
    default: "3.10"
  install-playwright:
    description: Install Playwright browsers when true.
    required: false
    default: "false"
  install-requirements:
    description: Space-delimited list of requirement files to install.
    required: false
    default: "requirements.txt requirements-test.txt"
  run-db-setup:
    description: Run database wait and setup scripts when true.
    required: false
    default: "true"
  db-dry-run:
    description: Execute setup script dry run before live run when true.
    required: false
    default: "true"
runs:
  using: composite
  steps:
    - name: Set up Python
      uses: actions/setup-python@v5
      with:
        python-version: ${{ inputs.python-version }}
    - name: Configure apt proxy
      shell: bash
      run: |
        set -euo pipefail
        PROXY_HOST="http://apt-cacher:3142"
        if ! curl -fsS --connect-timeout 3 "${PROXY_HOST}" >/dev/null; then
          PROXY_HOST="http://192.168.88.14:3142"
        fi
        echo "Using APT proxy ${PROXY_HOST}"
        {
          echo "http_proxy=${PROXY_HOST}"
          echo "https_proxy=${PROXY_HOST}"
          echo "HTTP_PROXY=${PROXY_HOST}"
          echo "HTTPS_PROXY=${PROXY_HOST}"
        } >> "$GITHUB_ENV"
        sudo tee /etc/apt/apt.conf.d/01proxy >/dev/null <<EOF
        Acquire::http::Proxy "${PROXY_HOST}";
        Acquire::https::Proxy "${PROXY_HOST}";
        EOF
    - name: Install dependencies
      shell: bash
      run: |
        set -euo pipefail
        requirements="${{ inputs.install-requirements }}"
        if [ -n "${requirements}" ]; then
          for requirement in ${requirements}; do
            if [ -f "${requirement}" ]; then
              pip install -r "${requirement}"
            else
              echo "Requirement file ${requirement} not found" >&2
              exit 1
            fi
          done
        fi
    - name: Install Playwright browsers
      if: ${{ inputs.install-playwright == 'true' }}
      shell: bash
      run: |
        set -euo pipefail
        python -m playwright install --with-deps
    - name: Wait for database service
      if: ${{ inputs.run-db-setup == 'true' }}
      shell: bash
      run: |
        set -euo pipefail
        python - <<'PY'
        import os
        import time
        import psycopg2
        dsn = (
            f"dbname={os.environ['DATABASE_SUPERUSER_DB']} "
            f"user={os.environ['DATABASE_SUPERUSER']} "
            f"password={os.environ['DATABASE_SUPERUSER_PASSWORD']} "
            f"host={os.environ['DATABASE_HOST']} "
            f"port={os.environ['DATABASE_PORT']}"
        )
        for attempt in range(30):
            try:
                with psycopg2.connect(dsn):
                    break
            except psycopg2.OperationalError:
                time.sleep(2)
        else:
            raise SystemExit("Postgres service did not become available")
        PY
    - name: Run database setup (dry run)
      if: ${{ inputs.run-db-setup == 'true' && inputs.db-dry-run == 'true' }}
      shell: bash
      run: |
        set -euo pipefail
        python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data --dry-run -v
    - name: Run database setup
      if: ${{ inputs.run-db-setup == 'true' }}
      shell: bash
      run: |
        set -euo pipefail
        python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data -v

.gitea/workflows/build-and-push.yml

@@ -1,11 +1,16 @@
 name: Build and Push Docker Image
 on:
-  push:
+  workflow_run:
+    workflows:
+      - Run Tests
     branches:
       - main
+    types:
+      - completed
 jobs:
   build-and-push:
+    if: ${{ github.event.workflow_run.conclusion == 'success' }}
     runs-on: ubuntu-latest
     env:
       DEFAULT_BRANCH: main
@@ -14,6 +19,8 @@ jobs:
       REGISTRY_URL: ${{ secrets.REGISTRY_URL }}
       REGISTRY_USERNAME: ${{ secrets.REGISTRY_USERNAME }}
       REGISTRY_PASSWORD: ${{ secrets.REGISTRY_PASSWORD }}
+      WORKFLOW_RUN_HEAD_BRANCH: ${{ github.event.workflow_run.head_branch }}
+      WORKFLOW_RUN_HEAD_SHA: ${{ github.event.workflow_run.head_sha }}
     steps:
       - name: Checkout code
         uses: actions/checkout@v4
@@ -26,6 +33,14 @@ jobs:
           event_name="${GITHUB_EVENT_NAME:-}"
           sha="${GITHUB_SHA:-}"
+          if [ -z "$ref_name" ] && [ -n "${WORKFLOW_RUN_HEAD_BRANCH:-}" ]; then
+            ref_name="${WORKFLOW_RUN_HEAD_BRANCH}"
+          fi
+          if [ -z "$sha" ] && [ -n "${WORKFLOW_RUN_HEAD_SHA:-}" ]; then
+            sha="${WORKFLOW_RUN_HEAD_SHA}"
+          fi
           if [ "$ref_name" = "${DEFAULT_BRANCH:-main}" ]; then
             echo "on_default=true" >> "$GITHUB_OUTPUT"
           else

.gitea/workflows/deploy.yml

@@ -1,11 +1,16 @@
 name: Deploy to Server
 on:
-  push:
+  workflow_run:
+    workflows:
+      - Build and Push Docker Image
     branches:
       - main
+    types:
+      - completed
 jobs:
   deploy:
+    if: ${{ github.event.workflow_run.conclusion == 'success' }}
     runs-on: ubuntu-latest
     env:
       DEFAULT_BRANCH: main
@@ -14,6 +19,8 @@ jobs:
       REGISTRY_URL: ${{ secrets.REGISTRY_URL }}
       REGISTRY_USERNAME: ${{ secrets.REGISTRY_USERNAME }}
       REGISTRY_PASSWORD: ${{ secrets.REGISTRY_PASSWORD }}
+      WORKFLOW_RUN_HEAD_BRANCH: ${{ github.event.workflow_run.head_branch }}
+      WORKFLOW_RUN_HEAD_SHA: ${{ github.event.workflow_run.head_sha }}
     steps:
       - name: SSH and deploy
         uses: appleboy/ssh-action@master
@@ -22,7 +29,15 @@ jobs:
           username: ${{ secrets.SSH_USERNAME }}
           key: ${{ secrets.SSH_PRIVATE_KEY }}
           script: |
-            docker pull ${{ env.REGISTRY_URL }}/${{ env.REGISTRY_ORG }}/${{ env.REGISTRY_IMAGE_NAME }}:latest
+            IMAGE_SHA="${{ env.WORKFLOW_RUN_HEAD_SHA }}"
+            IMAGE_PATH="${{ env.REGISTRY_URL }}/${{ env.REGISTRY_ORG }}/${{ env.REGISTRY_IMAGE_NAME }}"
+            if [ -z "$IMAGE_SHA" ]; then
+              echo "Missing workflow run head SHA; aborting deployment." >&2
+              exit 1
+            fi
+            docker pull "$IMAGE_PATH:$IMAGE_SHA"
             docker stop calminer || true
             docker rm calminer || true
             docker run -d --name calminer -p 8000:8000 \
@@ -33,4 +48,4 @@ jobs:
               -e DATABASE_PASSWORD=${{ secrets.DATABASE_PASSWORD }} \
               -e DATABASE_NAME=${{ secrets.DATABASE_NAME }} \
               -e DATABASE_SCHEMA=${{ secrets.DATABASE_SCHEMA }} \
-              ${{ secrets.REGISTRY_URL }}/${{ secrets.REGISTRY_USERNAME }}/calminer:latest
+              "$IMAGE_PATH:$IMAGE_SHA"

.gitea/workflows/test.yml

@@ -2,7 +2,25 @@ name: Run Tests
 on: [push]
 jobs:
-  test:
+  tests:
+    name: ${{ matrix.target }} tests
+    runs-on: ubuntu-latest
+    env:
+      DATABASE_DRIVER: postgresql
+      DATABASE_HOST: postgres
+      DATABASE_PORT: "5432"
+      DATABASE_NAME: calminer_ci
+      DATABASE_USER: calminer
+      DATABASE_PASSWORD: secret
+      DATABASE_SCHEMA: public
+      DATABASE_SUPERUSER: calminer
+      DATABASE_SUPERUSER_PASSWORD: secret
+      DATABASE_SUPERUSER_DB: calminer_ci
+      DATABASE_URL: postgresql+psycopg2://calminer:secret@postgres:5432/calminer_ci
+    strategy:
+      fail-fast: false
+      matrix:
+        target: [unit, e2e]
     services:
       postgres:
         image: postgres:16-alpine
@@ -10,116 +28,22 @@ jobs:
           POSTGRES_DB: calminer_ci
           POSTGRES_USER: calminer
           POSTGRES_PASSWORD: secret
         ports:
           - 5432:5432
         options: >-
           --health-cmd "pg_isready -U calminer -d calminer_ci"
           --health-interval 10s
           --health-timeout 5s
           --health-retries 10
-    runs-on: ubuntu-latest
     steps:
       - name: Checkout code
         uses: actions/checkout@v4
-      - name: Set up Python
-        uses: actions/setup-python@v5
+      - name: Prepare Python environment
+        uses: ./.gitea/actions/setup-python-env
         with:
           python-version: "3.10"
-      - name: Configure apt proxy
-        run: |
-          set -euo pipefail
-          PROXY_HOST="http://apt-cacher:3142"
-          if ! curl -fsS --connect-timeout 3 "${PROXY_HOST}" >/dev/null; then
-            PROXY_HOST="http://192.168.88.14:3142"
-          fi
-          echo "Using APT proxy ${PROXY_HOST}"
-          echo "http_proxy=${PROXY_HOST}" >> "$GITHUB_ENV"
-          echo "https_proxy=${PROXY_HOST}" >> "$GITHUB_ENV"
-          echo "HTTP_PROXY=${PROXY_HOST}" >> "$GITHUB_ENV"
-          echo "HTTPS_PROXY=${PROXY_HOST}" >> "$GITHUB_ENV"
-          sudo tee /etc/apt/apt.conf.d/01proxy >/dev/null <<EOF
-          Acquire::http::Proxy "${PROXY_HOST}";
-          Acquire::https::Proxy "${PROXY_HOST}";
-          EOF
-      # - name: Cache pip
-      #   uses: actions/cache@v4
-      #   with:
-      #     path: ~/.cache/pip
-      #     key: ${{ runner.os }}-pip-${{ hashFiles('requirements.txt', 'requirements-test.txt') }}
-      #     restore-keys: |
-      #       ${{ runner.os }}-pip-${{ hashFiles('requirements.txt') }}
-      #       ${{ runner.os }}-pip-
-      - name: Install dependencies
-        run: |
-          pip install -r requirements.txt
-          pip install -r requirements-test.txt
-      - name: Install Playwright browsers
-        run: |
-          python -m playwright install --with-deps
-      - name: Wait for database service
-        env:
-          DATABASE_DRIVER: postgresql
-          DATABASE_HOST: postgres
-          DATABASE_PORT: "5432"
-          DATABASE_NAME: calminer_ci
-          DATABASE_USER: calminer
-          DATABASE_PASSWORD: secret
-          DATABASE_SCHEMA: public
-          DATABASE_SUPERUSER: calminer
-          DATABASE_SUPERUSER_PASSWORD: secret
-          DATABASE_SUPERUSER_DB: calminer_ci
-        run: |
-          python - <<'PY'
-          import os
-          import time
-          import psycopg2
-          dsn = (
-              f"dbname={os.environ['DATABASE_SUPERUSER_DB']} "
-              f"user={os.environ['DATABASE_SUPERUSER']} "
-              f"password={os.environ['DATABASE_SUPERUSER_PASSWORD']} "
-              f"host={os.environ['DATABASE_HOST']} "
-              f"port={os.environ['DATABASE_PORT']}"
-          )
-          for attempt in range(30):
-              try:
-                  with psycopg2.connect(dsn):
-                      break
-              except psycopg2.OperationalError:
-                  time.sleep(2)
-          else:
-              raise SystemExit("Postgres service did not become available")
-          PY
-      - name: Run database setup (dry run)
-        env:
-          DATABASE_DRIVER: postgresql
-          DATABASE_HOST: postgres
-          DATABASE_PORT: "5432"
-          DATABASE_NAME: calminer_ci
-          DATABASE_USER: calminer
-          DATABASE_PASSWORD: secret
-          DATABASE_SCHEMA: public
-          DATABASE_SUPERUSER: calminer
-          DATABASE_SUPERUSER_PASSWORD: secret
-          DATABASE_SUPERUSER_DB: calminer_ci
-        run: python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data --dry-run -v
-      - name: Run database setup
-        env:
-          DATABASE_DRIVER: postgresql
-          DATABASE_HOST: postgres
-          DATABASE_PORT: "5432"
-          DATABASE_NAME: calminer_ci
-          DATABASE_USER: calminer
-          DATABASE_PASSWORD: secret
-          DATABASE_SCHEMA: public
-          DATABASE_SUPERUSER: calminer
-          DATABASE_SUPERUSER_PASSWORD: secret
-          DATABASE_SUPERUSER_DB: calminer_ci
-        run: python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data -v
+          install-playwright: ${{ matrix.target == 'e2e' }}
       - name: Run tests
-        env:
-          DATABASE_URL: postgresql+psycopg2://calminer:secret@postgres:5432/calminer_ci
-          DATABASE_SCHEMA: public
-        run: pytest
+        run: |
+          if [ "${{ matrix.target }}" = "unit" ]; then
+            pytest tests/unit
+          else
+            pytest tests/e2e
+          fi

.gitignore

@@ -45,3 +45,6 @@ logs/
 # SQLite database
 *.sqlite3
 test*.db
+# Docker files
+.runner

docs/architecture/07_deployment/07_01_testing_ci.md

@@ -0,0 +1,218 @@
# Testing, CI and Quality Assurance
This chapter centralizes the project's testing strategy, CI configuration, and quality targets.
## Overview
CalMiner uses a combination of unit, integration, and end-to-end tests to ensure quality.
### Frameworks
- Backend: pytest for unit and integration tests.
- Frontend: pytest with Playwright for E2E tests.
- Database: pytest fixtures with psycopg2 for DB tests.
### Test Types
- Unit Tests: Test individual functions/modules.
- Integration Tests: Test API endpoints and DB interactions.
- E2E Tests: Playwright for full user flows.
### CI/CD
- Use Gitea Actions for CI/CD; workflows live under `.gitea/workflows/`.
- `test.yml` runs on every push and fans out into parallel matrix jobs for the unit (`pytest tests/unit`) and end-to-end (`pytest tests/e2e`) suites; each job provisions a temporary Postgres 16 service, waits for readiness, and executes the setup script in dry-run and live modes. Playwright browsers are installed only for the E2E job.
- `build-and-push.yml` runs only after the **Run Tests** workflow finishes successfully (triggered via `workflow_run` on `main`). Once tests pass, it builds the Docker image with `docker/build-push-action@v2`, reuses cache-backed layers, and pushes to the Gitea registry.
- `deploy.yml` runs only after the build workflow reports success on `main`. It connects to the target host (via `appleboy/ssh-action`), pulls the Docker image tagged with the build commit SHA, and restarts the container with that exact image reference.
- Mandatory secrets: `REGISTRY_USERNAME`, `REGISTRY_PASSWORD`, `REGISTRY_URL`, `SSH_HOST`, `SSH_USERNAME`, `SSH_PRIVATE_KEY`.
- Run tests on pull requests to shared branches; enforce coverage target ≥80% (pytest-cov).
### Running Tests
- Unit: `pytest tests/unit/`
- E2E: `pytest tests/e2e/`
- All: `pytest`
### Test Directory Structure
Organize tests under the `tests/` directory mirroring the application structure:
```text
tests/
  unit/
    test_<module>.py
  e2e/
    test_<flow>.py
  fixtures/
    conftest.py
```
### Fixtures and Test Data
- Define reusable fixtures in `tests/fixtures/conftest.py`.
- Use temporary in-memory databases or isolated schemas for DB tests (see the fixture sketch after this list).
- Load sample data via fixtures for consistent test environments.
- Leverage the `seeded_ui_data` fixture in `tests/unit/conftest.py` to populate scenarios with related cost, maintenance, and simulation records for deterministic UI route checks.
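A minimal sketch of such a reusable fixture, assuming SQLAlchemy manages sessions (the fixture name and file layout here are illustrative, not the project's actual code):
```python
# tests/fixtures/conftest.py (illustrative): one throwaway database per test.
import pytest
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker


@pytest.fixture()
def db_session(tmp_path):
    """Yield a session bound to an isolated SQLite file that vanishes with tmp_path."""
    engine = create_engine(f"sqlite:///{tmp_path / 'test.sqlite3'}")
    session = sessionmaker(bind=engine)()
    try:
        yield session
    finally:
        session.close()
        engine.dispose()
```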
### E2E (Playwright) Tests
The E2E test suite, located in `tests/e2e/`, uses Playwright to simulate user interactions in a live browser environment. These tests are designed to catch issues in the UI, frontend-backend integration, and overall application flow.
#### Fixtures
- `live_server`: A session-scoped fixture that launches the FastAPI application in a separate process, making it accessible to the browser; a minimal sketch follows this list.
- `playwright_instance`, `browser`, `page`: Standard `pytest-playwright` fixtures for managing the Playwright instance, browser, and individual pages.
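A rough sketch of how such a `live_server` fixture can be assembled; the `main:app` module path, fixed port, and sleep-based readiness wait are assumptions:
```python
# tests/e2e/conftest.py (illustrative sketch, not the project's actual fixture).
import multiprocessing
import time

import pytest
import uvicorn


def _serve() -> None:
    uvicorn.run("main:app", host="127.0.0.1", port=8000)  # assumed app location


@pytest.fixture(scope="session")
def live_server():
    proc = multiprocessing.Process(target=_serve, daemon=True)
    proc.start()
    time.sleep(2)  # crude readiness wait; polling the port would be more robust
    yield "http://127.0.0.1:8000"
    proc.terminate()
    proc.join()
```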
#### Smoke Tests
- UI Page Loading: `test_smoke.py` contains a parameterized test that systematically navigates to all UI routes to ensure they load without errors, have the correct title, and display a primary heading (a condensed sketch follows this list).
- Form Submissions: Each major form in the application has a corresponding test file (e.g., `test_scenarios.py`, `test_costs.py`) that verifies the page loads, creates an item by filling out the form, confirms the success message, and checks that the UI updates.
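A condensed sketch of the parameterized smoke test; the route list is a hypothetical subset and the assertions mirror the checks described above:
```python
# tests/e2e/test_smoke.py (sketch); UI_ROUTES is a placeholder subset.
import pytest

UI_ROUTES = ["/", "/scenarios", "/costs"]


@pytest.mark.parametrize("route", UI_ROUTES)
def test_ui_page_loads(page, live_server, route):
    page.goto(f"{live_server}{route}")  # `page` comes from pytest-playwright
    assert page.title()  # the page rendered with a title
    assert page.locator("h1").first.is_visible()  # primary heading is shown
```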
### Running E2E Tests
To run the Playwright tests:
```bash
pytest tests/e2e/
```
To run headed mode:
```bash
pytest tests/e2e/ --headed
```
### Mocking and Dependency Injection
- Use `unittest.mock` to mock external dependencies.
- Inject dependencies via function parameters or FastAPI's dependency overrides in tests, as sketched below.
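A self-contained sketch of FastAPI's `dependency_overrides` mechanism; the endpoint and dependency are invented for illustration:
```python
from fastapi import Depends, FastAPI
from fastapi.testclient import TestClient

app = FastAPI()


def get_settings() -> dict:
    return {"env": "production"}


@app.get("/env")
def read_env(settings: dict = Depends(get_settings)) -> dict:
    return settings


def test_env_uses_override():
    app.dependency_overrides[get_settings] = lambda: {"env": "test"}
    try:
        assert TestClient(app).get("/env").json() == {"env": "test"}
    finally:
        app.dependency_overrides.clear()  # avoid leaking overrides across tests
```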
### Code Coverage
- Install `pytest-cov` to generate coverage reports.
- Run with coverage: `pytest --cov --cov-report=term` (use `--cov-report=html` when visualizing hotspots).
- Target 95%+ overall coverage. Focus on historically low modules: `services/simulation.py`, `services/reporting.py`, `middleware/validation.py`, and `routes/ui.py`.
- Latest snapshot (2025-10-21): `pytest --cov=. --cov-report=term-missing` returns **91%** overall coverage.
### CI Integration
`test.yml` encapsulates the steps below:
- Check out the repository and set up Python 3.10.
- Configure the runner's apt proxy (if available), install project dependencies (requirements + test extras), and download Playwright browsers.
- Run `pytest` (extend with `--cov` flags when enforcing coverage).
> The pip cache step is temporarily disabled in `test.yml` until the self-hosted cache service is exposed (see `docs/ci-cache-troubleshooting.md`).
`build-and-push.yml` adds:
- Registry login using repository secrets.
- Docker image build/push with GHA cache storage (`cache-from/cache-to` set to `type=gha`).
`deploy.yml` handles:
- SSH into the deployment host.
- Pull the tagged image from the registry.
- Stop, remove, and relaunch the `calminer` container exposing port 8000.
When adding new workflows, mirror this structure to ensure secrets, caching, and deployment steps remain aligned with the production environment.
## CI Owner Coordination Notes
### Key Findings
- Self-hosted runner: ASUS System Product Name chassis with AMD Ryzen 7 7700X (8 physical cores / 16 threads) and 63.2 GB usable RAM; `act_runner` configuration not overridden, so only one workflow job runs concurrently today.
- Unit test matrix job: completes 117 pytest cases in roughly 4.1 seconds after Postgres spins up; Docker services consume ~150 MB for `postgres:16-alpine`, with minimal sustained CPU load once tests begin.
- End-to-end matrix job: `pytest tests/e2e` averages 21-22 seconds of execution, but a cold run downloads ~179 MB of apt packages plus ~470 MB of Playwright browser bundles (Chromium, Firefox, WebKit, FFmpeg), exceeding 650 MB of network transfer and adding several gigabytes of disk writes if caches are absent.
- Both jobs reuse existing Python package caches when available; absent a shared cache service, repeated Playwright installs remain the dominant cost driver for cold executions.
### Open Questions
- Can we raise the runner concurrency above the default single job, or provision an additional runner, so the test matrix can execute without serializing queued workflows?
- Is there a central cache or artifact service available for Python wheels and Playwright browser bundles to avoid ~650 MB downloads on cold starts?
- Are we permitted to bake Playwright browsers into the base runner image, or should we pursue a shared cache/proxy solution instead?
### Outreach Draft
```text
Subject: CalMiner CI parallelization support
Hi <CI Owner>,
We recently updated the CalMiner test workflow to fan out unit and Playwright E2E suites in parallel. While validating the change, we gathered the following:
- Runner host: ASUS System Product Name with AMD Ryzen 7 7700X (8 cores / 16 threads), ~63 GB RAM, default `act_runner` concurrency (1 job at a time).
- Unit job finishes in ~4.1 s once Postgres is ready; light CPU and network usage.
- E2E job finishes in ~22 s, but a cold run pulls ~179 MB of apt packages plus ~470 MB of Playwright browser payloads (>650 MB download, several GB disk writes) because we do not have a shared cache yet.
To move forward, could you help with the following?
1. Confirm whether we can raise the runner concurrency limit or provision an additional runner so parallel jobs do not queue behind one another.
2. Let us know if a central cache (Artifactory, Nexus, etc.) is available for Python wheels and Playwright browser bundles, or if we should consider baking the browsers into the runner image instead.
3. Share any guidance on preferred caching or proxy solutions for large binary installs on self-hosted runners.
Once we have clarity, we can finalize the parallel rollout and update the documentation accordingly.
Thanks,
<Your Name>
```
## Workflow Optimization Opportunities
### `test.yml`
- Run the apt-proxy setup once via a composite action or preconfigured runner image if additional matrix jobs are added.
- Collapse dependency installation into a single `pip install -r requirements-test.txt` call (includes base requirements) once caching is restored.
- Investigate caching or pre-baking Playwright browser binaries to eliminate >650 MB cold downloads per run.
### `build-and-push.yml`
- Skip QEMU setup or explicitly constrain Buildx to linux/amd64 to reduce startup time.
- Enable `cache-from` / `cache-to` settings (registry or `type=gha`) to reuse Docker build layers between runs.
### `deploy.yml`
- Extract deployment script into a reusable shell script or compose file to minimize inline secrets and ease multi-environment scaling.
- Add a post-deploy health check (e.g., a `curl` readiness probe) before declaring success; a sketch follows this list.
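One way to express that probe, sketched in Python; the script name, endpoint, and retry budget are assumptions rather than existing project code:
```python
# scripts/wait_healthy.py (hypothetical helper).
import sys
import time
import urllib.request


def wait_healthy(url: str = "http://localhost:8000/", attempts: int = 30) -> None:
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=3) as resp:
                if resp.status == 200:
                    return
        except OSError:
            pass  # connection refused while the container is still starting
        time.sleep(2)
    sys.exit("Deployment did not become healthy in time")


if __name__ == "__main__":
    wait_healthy()
```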
### Priority Overview
1. Restore shared caching for Python wheels and Playwright browsers once infrastructure exposes the cache service (highest impact on runtime and bandwidth; requires coordination with CI owners).
2. Enable Docker layer caching in `build-and-push.yml` to shorten build cycles (medium effort, immediate benefit to release workflows).
3. Add post-deploy health verification to `deploy.yml` (low effort, improves confidence in automation).
4. Streamline redundant setup steps in `test.yml` (medium effort once cache strategy is in place; consider composite actions or base image updates).
### Setup Consolidation Opportunities
- `Run Tests` matrix jobs each execute the apt proxy configuration, pip installs, database wait, and setup scripts. A composite action or shell script wrapper could centralize these routines and parameterize target-specific behavior (unit vs e2e) to avoid copy/paste maintenance as additional jobs (lint, type check) are introduced.
- Both the test and build workflows perform a `checkout` step; while unavoidable per workflow, shared git submodules or sparse checkout rules could be encapsulated in a composite action to keep options consistent.
- The database setup script currently runs twice (dry-run and live) for every matrix leg. Evaluate whether the dry-run remains necessary once migrations stabilize; if retained, consider adding an environment variable toggle to skip redundant seed operations for read-only suites (e.g., lint), as sketched after this list.
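One possible shape for that toggle inside `scripts/setup_database.py`; the `CALMINER_SKIP_SEED` variable name is invented for illustration:
```python
import os


def seeding_enabled() -> bool:
    """Return False when the caller opts out of seed data, e.g. for lint-only jobs."""
    return os.environ.get("CALMINER_SKIP_SEED", "").lower() not in {"1", "true", "yes"}
```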
### Proposed Shared Setup Action
- Location: `.gitea/actions/setup-python-env/action.yml` (composite action).
- Inputs:
- `python-version` (default `3.10`): forwarded to `actions/setup-python`.
- `install-playwright` (default `false`): when `true`, run `python -m playwright install --with-deps`.
- `install-requirements` (default `requirements.txt requirements-test.txt`): space-delimited list pip installs iterate over.
- `run-db-setup` (default `true`): toggles database wait + setup scripts.
- `db-dry-run` (default `true`): controls whether the dry-run invocation executes.
- Steps encapsulated:
1. Set up Python via `actions/setup-python@v5` using provided version.
2. Configure apt proxy via shared shell snippet (with graceful fallback when proxy offline).
3. Iterate over requirement files and execute `pip install -r <file>`.
4. If `install-playwright == true`, install browsers.
5. If `run-db-setup == true`, run the wait-for-Postgres python snippet and call `scripts/setup_database.py`, honoring `db-dry-run` toggle.
- Usage sketch (in `test.yml`):
```yaml
- name: Prepare Python environment
uses: ./.gitea/actions/setup-python-env
with:
install-playwright: ${{ matrix.target == 'e2e' }}
db-dry-run: true
```
- Benefits: centralizes proxy logic and dependency installs, reduces duplication across matrix jobs, and keeps future lint/type-check jobs lightweight by disabling database setup.
- Implementation status: action available at `.gitea/actions/setup-python-env` and consumed by `test.yml`; extend to additional workflows as they adopt the shared routine.
- Obsolete steps removed: individual apt proxy, dependency install, Playwright, and database setup commands pruned from `test.yml` once the composite action was integrated.

docs/architecture/07_deployment_view.md

@@ -18,30 +18,34 @@ The CalMiner application is deployed using a multi-tier architecture consisting
 ```mermaid
 graph TD
-    A[Client Layer<br/>(Web Browsers)] --> B[Web Application Layer<br/>(FastAPI)]
-    B --> C[Database Layer<br/>(PostgreSQL)]
+    A[Client Layer] --> B[Web Application Layer]
+    B --> C[Database Layer]
 ```
 ## Infrastructure Components
 The infrastructure components for the application include:
-- **Web Server**: Hosts the FastAPI application and serves API endpoints.
-- **Database Server**: PostgreSQL database for persisting application data.
-- **Static File Server**: Serves static assets such as CSS, JavaScript, and image files.
 - **Reverse Proxy (optional)**: An Nginx or Apache server can be used as a reverse proxy.
 - **Containerization**: Docker images are generated via the repository `Dockerfile`, using a multi-stage build to keep the final runtime minimal.
 - **CI/CD Pipeline**: Automated pipelines (Gitea Actions) run tests, build/push Docker images, and trigger deployments.
+- **Gitea Actions Workflows**: Located under `.gitea/workflows/`, these workflows handle testing, building, pushing, and deploying the application.
+- **Gitea Action Runners**: Self-hosted runners execute the CI/CD workflows.
+- **Testing and Continuous Integration**: Automated tests ensure code quality before deployment, also documented in [Testing & CI](07_deployment/07_01_testing_ci.md).
+- **Docker Infrastructure**: Docker is used to containerize the application for consistent deployment across environments.
+- **Portainer**: Production deployment environment for managing Docker containers.
+- **Web Server**: Hosts the FastAPI application and serves API endpoints.
+- **Database Server**: PostgreSQL database for persisting application data.
+- **Static File Server**: Serves static assets such as CSS, JavaScript, and image files.
 - **Cloud Infrastructure (optional)**: The application can be deployed on cloud platforms.
 ```mermaid
 graph TD
-    A[Web Server] --> B[Database Server]
-    A --> C[Static File Server]
-    A --> D[Reverse Proxy]
-    A --> E[Containerization]
-    A --> F[CI/CD Pipeline]
-    A --> G[Cloud Infrastructure]
+    W[Web Server] --> DB[Database Server]
+    W --> S[Static File Server]
+    P[Reverse Proxy] --> W
+    C[CI/CD Pipeline] --> W
+    F[Containerization] --> W
 ```
## Environments
@@ -74,7 +78,7 @@ The production environment is set up for serving live traffic and includes:
## Containerized Deployment Flow
-The Docker-based deployment path aligns with the solution strategy documented in [04 — Solution Strategy](04_solution_strategy.md) and the CI practices captured in [14 — Testing & CI](14_testing_ci.md).
+The Docker-based deployment path aligns with the solution strategy documented in [Solution Strategy](04_solution_strategy.md) and the CI practices captured in [Testing & CI](07_deployment/07_01_testing_ci.md).
### Image Build
@@ -95,7 +99,7 @@ The Docker-based deployment path aligns with the solution strategy documented in
 - `build-and-push.yml` logs into the container registry, rebuilds the Docker image using GitHub Actions cache-backed layers, and pushes `latest` (and additional tags as required).
 - `deploy.yml` connects to the target host via SSH, pulls the pushed tag, stops any existing container, and launches the new version.
 - Required secrets: `REGISTRY_URL`, `REGISTRY_USERNAME`, `REGISTRY_PASSWORD`, `SSH_HOST`, `SSH_USERNAME`, `SSH_PRIVATE_KEY`.
-- Extend these workflows when introducing staging/blue-green deployments; keep cross-links with [14 — Testing & CI](14_testing_ci.md) up to date.
+- Extend these workflows when introducing staging/blue-green deployments; keep cross-links with [Testing & CI](07_deployment/07_01_testing_ci.md) up to date.
## Integrations and Future Work (deployment-related)

docs/architecture/14_testing_ci.md

@@ -1,118 +0,0 @@
# 14 Testing, CI and Quality Assurance
This chapter centralizes the project's testing strategy, CI configuration, and quality targets.
## Overview
CalMiner uses a combination of unit, integration, and end-to-end tests to ensure quality.
### Frameworks
- Backend: pytest for unit and integration tests.
- Frontend: pytest with Playwright for E2E tests.
- Database: pytest fixtures with psycopg2 for DB tests.
### Test Types
- Unit Tests: Test individual functions/modules.
- Integration Tests: Test API endpoints and DB interactions.
- E2E Tests: Playwright for full user flows.
### CI/CD
- Use Gitea Actions for CI/CD; workflows live under `.gitea/workflows/`.
- `test.yml` runs on every push, provisions a temporary Postgres 16 service, waits for readiness, executes the setup script in dry-run and live modes, installs Playwright browsers, and finally runs the full pytest suite.
- `build-and-push.yml` builds the Docker image with `docker/build-push-action@v2`, reusing GitHub Actions cache-backed layers, and pushes to the Gitea registry.
- `deploy.yml` connects to the target host (via `appleboy/ssh-action`) to pull the freshly pushed image and restart the container.
- Mandatory secrets: `REGISTRY_USERNAME`, `REGISTRY_PASSWORD`, `REGISTRY_URL`, `SSH_HOST`, `SSH_USERNAME`, `SSH_PRIVATE_KEY`.
- Run tests on pull requests to shared branches; enforce coverage target ≥80% (pytest-cov).
### Running Tests
- Unit: `pytest tests/unit/`
- E2E: `pytest tests/e2e/`
- All: `pytest`
### Test Directory Structure
Organize tests under the `tests/` directory mirroring the application structure:
```text
tests/
unit/
test_<module>.py
e2e/
test_<flow>.py
fixtures/
conftest.py
```
### Fixtures and Test Data
- Define reusable fixtures in `tests/fixtures/conftest.py`.
- Use temporary in-memory databases or isolated schemas for DB tests.
- Load sample data via fixtures for consistent test environments.
- Leverage the `seeded_ui_data` fixture in `tests/unit/conftest.py` to populate scenarios with related cost, maintenance, and simulation records for deterministic UI route checks.
### E2E (Playwright) Tests
The E2E test suite, located in `tests/e2e/`, uses Playwright to simulate user interactions in a live browser environment. These tests are designed to catch issues in the UI, frontend-backend integration, and overall application flow.
#### Fixtures
- `live_server`: A session-scoped fixture that launches the FastAPI application in a separate process, making it accessible to the browser.
- `playwright_instance`, `browser`, `page`: Standard `pytest-playwright` fixtures for managing the Playwright instance, browser, and individual pages.
#### Smoke Tests
- UI Page Loading: `test_smoke.py` contains a parameterized test that systematically navigates to all UI routes to ensure they load without errors, have the correct title, and display a primary heading.
- Form Submissions: Each major form in the application has a corresponding test file (e.g., `test_scenarios.py`, `test_costs.py`) that verifies: page loads, create item by filling the form, success message, and UI updates.
### Running E2E Tests
To run the Playwright tests:
```bash
pytest tests/e2e/
```
To run headed mode:
```bash
pytest tests/e2e/ --headed
```
### Mocking and Dependency Injection
- Use `unittest.mock` to mock external dependencies.
- Inject dependencies via function parameters or FastAPI's dependency overrides in tests.
### Code Coverage
- Install `pytest-cov` to generate coverage reports.
- Run with coverage: `pytest --cov --cov-report=term` (use `--cov-report=html` when visualizing hotspots).
- Target 95%+ overall coverage. Focus on historically low modules: `services/simulation.py`, `services/reporting.py`, `middleware/validation.py`, and `routes/ui.py`.
- Latest snapshot (2025-10-21): `pytest --cov=. --cov-report=term-missing` returns **91%** overall coverage.
### CI Integration
`test.yml` encapsulates the steps below:
- Check out the repository and set up Python 3.10.
- Configure the runner's apt proxy (if available), install project dependencies (requirements + test extras), and download Playwright browsers.
- Run `pytest` (extend with `--cov` flags when enforcing coverage).
> The pip cache step is temporarily disabled in `test.yml` until the self-hosted cache service is exposed (see `docs/ci-cache-troubleshooting.md`).
`build-and-push.yml` adds:
- Registry login using repository secrets.
- Docker image build/push with GHA cache storage (`cache-from/cache-to` set to `type=gha`).
`deploy.yml` handles:
- SSH into the deployment host.
- Pull the tagged image from the registry.
- Stop, remove, and relaunch the `calminer` container exposing port 8000.
When adding new workflows, mirror this structure to ensure secrets, caching, and deployment steps remain aligned with the production environment.

docs/architecture/README.md

@@ -16,11 +16,11 @@ This folder mirrors the arc42 chapter structure (adapted to Markdown).
 - [05 Building Block View](05_building_block_view.md)
 - [06 Runtime View](06_runtime_view.md)
 - [07 Deployment View](07_deployment_view.md)
+  - [Testing & CI](07_deployment/07_01_testing_ci.md)
 - [08 Concepts](08_concepts.md)
 - [09 Architecture Decisions](09_architecture_decisions.md)
 - [10 Quality Requirements](10_quality_requirements.md)
 - [11 Technical Risks](11_technical_risks.md)
 - [12 Glossary](12_glossary.md)
 - [13 UI and Style](13_ui_and_style.md)
-- [14 Testing & CI](14_testing_ci.md)
 - [15 Development Setup](15_development_setup.md)

docs/README.md

@@ -245,7 +245,7 @@ The database contains tables such as `capex`, `opex`, `chemical_consumption`, `f
 ## Where to look next
 - Architecture overview & chapters: [architecture](architecture/README.md) (per-chapter files under `docs/architecture/`)
-- [Testing & CI](architecture/14_testing_ci.md)
+- [Testing & CI](architecture/07_deployment/07_01_testing_ci.md)
 - [Development setup](architecture/15_development_setup.md)
 - Implementation plan & roadmap: [Solution strategy](architecture/04_solution_strategy.md)
 - Routes: [routes](../routes/)