41 Commits

Author SHA1 Message Date
df1c971354 Merge https://git.allucanget.biz/allucanget/calminer into feat/ci-overhaul-20251029
Some checks failed
CI / test (pull_request) Failing after 2m15s
2025-11-02 16:29:25 +01:00
3a8aef04b0 fix: update database connection details in CI workflow for consistency 2025-11-02 16:29:19 +01:00
45d746d80a Merge pull request 'fix: update UVICORN_PORT and UVICORN_WORKERS in Dockerfile for consistency' (#10) from feat/ci-overhaul-20251029 into main
Some checks failed
CI / test (push) Failing after 4m22s
Reviewed-on: #10
2025-11-02 15:59:22 +01:00
f1bc7f06b9 fix: hardcode database connection details in CI workflow for consistency
Some checks failed
CI / test (pull_request) Failing after 1m54s
2025-11-02 13:40:06 +01:00
82e98efb1b fix: remove DB_PORT variable and use hardcoded value in CI workflow for consistency
Some checks failed
CI / test (pull_request) Failing after 2m24s
2025-11-02 13:09:09 +01:00
f91349dedd Merge branch 'main' into feat/ci-overhaul-20251029
Some checks failed
CI / test (pull_request) Failing after 2m47s
2025-11-02 13:02:18 +01:00
efee50fdc7 fix: update UVICORN_PORT and UVICORN_WORKERS in Dockerfile for consistency
Some checks failed
CI / test (pull_request) Failing after 2m39s
2025-11-02 12:23:26 +01:00
e254d50c0c Merge pull request 'fix: refactor database environment variables in CI workflow for consistency' (#9) from feat/ci-overhaul-20251029 into main
Some checks failed
CI / test (push) Failing after 1m55s
Reviewed-on: #9
2025-11-02 11:21:15 +01:00
6eef8424b7 fix: update DB_PORT to be a string in CI workflow for consistency
Some checks failed
CI / test (pull_request) Failing after 2m15s
2025-11-02 11:13:45 +01:00
c1f4902cf4 fix: update UVICORN_PORT to 8003 in Dockerfile and docker-compose.yml
Some checks failed
CI / test (pull_request) Failing after 3m19s
2025-11-02 11:07:28 +01:00
52450bc487 fix: refactor database environment variables in CI workflow for consistency
Some checks failed
CI / test (pull_request) Failing after 6m30s
2025-10-29 15:34:06 +01:00
c3449f1986 Merge pull request 'Add UI and styling documentation; remove idempotency and logging audits' (#8) from feat/ci-overhaul-20251029 into main
All checks were successful
CI / test (push) Successful in 2m22s
Reviewed-on: #8
2025-10-29 14:26:14 +01:00
f863808940 fix: update .gitignore to include ruff cache and clarify act runner files
All checks were successful
CI / test (pull_request) Successful in 2m21s
2025-10-29 14:23:24 +01:00
37646b571a fix: update system dependencies in CI workflow
Some checks failed
CI / test (pull_request) Failing after 2m51s
2025-10-29 13:57:22 +01:00
22f43bed56 fix: update CI workflow to configure apt-cacher-ng and install system dependencies
All checks were successful
CI / test (pull_request) Successful in 3m26s
2025-10-29 13:54:41 +01:00
72cf06a31d feat: add step to install Playwright browsers in CI workflow
Some checks failed
CI / test (pull_request) Failing after 1m10s
2025-10-29 13:39:02 +01:00
b796a053d6 fix: update database host in CI workflow to use service name
Some checks failed
CI / test (pull_request) Failing after 19s
2025-10-29 13:30:56 +01:00
04d7f202b6 Add UI and styling documentation; remove idempotency and logging audits
Some checks failed
CI / test (pull_request) Failing after 1m8s
- Introduced a new document outlining UI structure, reusable template components, CSS variable conventions, and per-page data/actions for the CalMiner application.
- Removed outdated idempotency audit and logging audit documents as they are no longer relevant.
- Updated quickstart guide to streamline developer setup instructions and link to relevant documentation.
- Created a roadmap document detailing scenario enhancements and data management strategies.
- Deleted the seed data plan document to consolidate information into the setup process.
- Refactored setup_database.py for improved logging and error handling during database setup and migration processes.
2025-10-29 13:20:44 +01:00
1f58de448c fix: container/compose/CI overhaul 2025-10-28 18:42:37 +01:00
807204869f fix: Improve database connection retry logic with detailed error messages
All checks were successful
Run Tests / Lint (push) Successful in 35s
Run Tests / Unit Tests (push) Successful in 47s
2025-10-28 15:04:52 +01:00
ddb23b1da0 fix: Update deployment script to use fallback branch for image tagging 2025-10-28 15:03:21 +01:00
26e231d63f Merge pull request 'fix: Enhance workflow conditions for E2E tests and deployment processes' (#7) from fest/ci-improvement into main
All checks were successful
Run Tests / Lint (push) Successful in 36s
Run Tests / Unit Tests (push) Successful in 42s
Reviewed-on: #7
2025-10-28 14:44:14 +01:00
d98d6ebe83 fix: Enhance workflow conditions for E2E tests and deployment processes
All checks were successful
Run E2E Tests / E2E Tests (push) Successful in 1m16s
Run Tests / Lint (push) Successful in 35s
Run Tests / Unit Tests (push) Successful in 42s
2025-10-28 14:41:00 +01:00
e881be52b5 Merge pull request 'feat/ci-improvement' (#6) from fest/ci-improvement into main
All checks were successful
Run E2E Tests / E2E Tests (push) Successful in 1m16s
Run Tests / Lint (push) Successful in 36s
Run Tests / Unit Tests (push) Successful in 41s
Reviewed-on: #6
2025-10-28 14:16:47 +01:00
cc8efa3eab Merge https://git.allucanget.biz/allucanget/calminer into fest/ci-improvement
All checks were successful
Run E2E Tests / E2E Tests (push) Successful in 1m17s
Run Tests / Lint (push) Successful in 36s
Run Tests / Unit Tests (push) Successful in 43s
2025-10-28 14:12:32 +01:00
29a17595da fix: Update E2E test workflow conditions and branch ignore settings
Some checks failed
Run Tests / Lint (push) Has been cancelled
Run Tests / Unit Tests (push) Has been cancelled
Run E2E Tests / E2E Tests (push) Has been cancelled
2025-10-28 14:11:36 +01:00
a0431cb630 Merge pull request 'refactor: Update workflow triggers for E2E tests and deployment processes' (#5) from fest/ci-improvement into main
All checks were successful
Run Tests / Lint (push) Successful in 36s
Run Tests / Unit Tests (push) Successful in 42s
Reviewed-on: #5
2025-10-28 13:55:34 +01:00
f1afcaa78b Merge https://git.allucanget.biz/allucanget/calminer into fest/ci-improvement
All checks were successful
Run Tests / Lint (push) Successful in 36s
Run Tests / Unit Tests (push) Successful in 42s
2025-10-28 13:54:07 +01:00
36da0609ed refactor: Update workflow triggers for E2E tests and deployment processes
All checks were successful
Run Tests / Lint (push) Successful in 36s
Run Tests / Unit Tests (push) Successful in 42s
2025-10-28 13:23:25 +01:00
26843104ee fix: Update workflow names and conditions for E2E tests
All checks were successful
Run Tests / Lint (push) Successful in 36s
Run Tests / Unit Tests (push) Successful in 42s
2025-10-28 11:26:41 +01:00
eb509e3dd2 Merge pull request 'feat/ci-improvement' (#4) from fest/ci-improvement into main
All checks were successful
Run E2E Tests / E2E Tests (push) Successful in 1m16s
Run Tests / Lint (push) Successful in 38s
Run Tests / Unit Tests (push) Successful in 42s
Reviewed-on: #4
2025-10-28 09:07:57 +01:00
51aa2fa71d Merge branch 'main' into fest/ci-improvement
All checks were successful
Run E2E Tests / E2E Tests (push) Successful in 1m16s
Run Tests / Lint (push) Successful in 36s
Run Tests / Unit Tests (push) Successful in 44s
Run E2E Tests / E2E Tests (pull_request) Successful in 1m21s
2025-10-28 09:00:05 +01:00
e1689c3a31 fix: Update pydantic version constraint in requirements.txt
All checks were successful
Run E2E Tests / E2E Tests (push) Successful in 1m16s
Run Tests / Lint (push) Successful in 52s
Run Tests / Unit Tests (push) Successful in 41s
Run E2E Tests / E2E Tests (pull_request) Successful in 1m16s
2025-10-28 08:52:37 +01:00
99d9ea7770 fix: Downgrade upload-artifact action to v3 for consistency
Some checks failed
Run E2E Tests / E2E Tests (push) Successful in 3m48s
Run Tests / Lint (push) Successful in 1m18s
Run Tests / Unit Tests (push) Failing after 57s
2025-10-28 08:34:27 +01:00
2136dbdd44 fix: Ensure bash shell is explicitly set for running E2E tests
Some checks failed
Run E2E Tests / E2E Tests (push) Failing after 1m47s
Run Tests / Lint (push) Successful in 50s
Run Tests / Unit Tests (push) Successful in 1m11s
2025-10-28 08:29:12 +01:00
3da8a50ac4 feat: Add E2E testing workflow with Playwright and PostgreSQL service
Some checks failed
Run E2E Tests / E2E Tests (push) Failing after 5m12s
Run Tests / Lint (push) Successful in 37s
Run Tests / Unit Tests (push) Successful in 44s
2025-10-28 08:19:07 +01:00
a772960390 feat: Add option to create isolated virtual environment in Python setup action
All checks were successful
Run Tests / Lint (push) Successful in 36s
Run Tests / Unit Tests (push) Successful in 42s
Run Tests / E2E Tests (push) Successful in 12m58s
2025-10-28 07:56:24 +01:00
89a4f663b5 feat: Add virtual environment creation step for Python setup
Some checks failed
Run Tests / Lint (push) Successful in 36s
Run Tests / Unit Tests (push) Successful in 42s
Run Tests / E2E Tests (push) Failing after 9m29s
2025-10-28 07:42:25 +01:00
50446c4248 feat: Refactor test workflow to separate lint, unit, and e2e jobs with health checks for PostgreSQL service
Some checks failed
Run Tests / Lint (push) Failing after 4s
Run Tests / Unit Tests (push) Failing after 5s
Run Tests / E2E Tests (push) Successful in 8m42s
2025-10-28 06:49:22 +01:00
c5a9a7c96f fix: Remove conditional execution for Node.js runtime installation in test workflow
All checks were successful
Run Tests / e2e tests (push) Successful in 1m17s
Run Tests / lint tests (push) Successful in 1m49s
Run Tests / unit tests (push) Successful in 55s
2025-10-27 22:07:31 +01:00
300ecebe23 Merge pull request 'fest/ci-improvement' (#3) from fest/ci-improvement into main
All checks were successful
Run Tests / e2e tests (push) Successful in 1m48s
Run Tests / unit tests (push) Successful in 10s
Reviewed-on: #3
2025-10-25 22:03:20 +02:00
29 changed files with 628 additions and 1425 deletions

View File

@@ -10,6 +10,8 @@ venv/
 .vscode
 .git
 .gitignore
+.gitea
+.github
 .DS_Store
 dist
 build
@@ -17,5 +19,9 @@ build
 *.sqlite3
 .env
 .env.*
-.Dockerfile
-.dockerignore
+coverage/
+logs/
+backups/
+tests/e2e/artifacts/
+scripts/__pycache__/
+reports/

View File

@@ -1,132 +0,0 @@
name: Setup Python Environment
description: Configure Python, proxies, dependencies, and optional database setup for CI jobs.
author: CalMiner Team
inputs:
python-version:
description: Python version to install.
required: false
default: '3.10'
use-system-python:
description: Skip setup-python and rely on the system Python already available in the environment.
required: false
default: 'false'
install-playwright:
description: Install Playwright browsers when true.
required: false
default: 'false'
install-requirements:
description: Space-delimited list of requirement files to install.
required: false
default: 'requirements.txt requirements-test.txt'
run-db-setup:
description: Run database wait and setup scripts when true.
required: false
default: 'true'
db-dry-run:
description: Execute setup script dry run before live run when true.
required: false
default: 'true'
runs:
using: composite
steps:
- name: Set up Python
if: ${{ inputs.use-system-python != 'true' }}
uses: actions/setup-python@v5
with:
python-version: ${{ inputs.python-version }}
- name: Verify system Python
if: ${{ inputs.use-system-python == 'true' }}
shell: bash
run: |
set -euo pipefail
if ! command -v python >/dev/null 2>&1; then
echo "Python executable not found on PATH" >&2
exit 1
fi
python --version
python -m pip --version >/dev/null 2>&1 || python -m ensurepip --upgrade
python -m pip --version
- name: Configure apt proxy
shell: bash
run: |
set -euo pipefail
PROXY_HOST="http://apt-cacher:3142"
if ! curl -fsS --connect-timeout 3 "${PROXY_HOST}" >/dev/null; then
PROXY_HOST="http://192.168.88.14:3142"
fi
echo "Using APT proxy ${PROXY_HOST}"
{
echo "http_proxy=${PROXY_HOST}"
echo "https_proxy=${PROXY_HOST}"
echo "HTTP_PROXY=${PROXY_HOST}"
echo "HTTPS_PROXY=${PROXY_HOST}"
} >> "$GITHUB_ENV"
if command -v sudo >/dev/null 2>&1; then
printf 'Acquire::http::Proxy "%s";\nAcquire::https::Proxy "%s";\n' "${PROXY_HOST}" "${PROXY_HOST}" | sudo tee /etc/apt/apt.conf.d/01proxy >/dev/null
elif [ "$(id -u)" -eq 0 ]; then
printf 'Acquire::http::Proxy "%s";\nAcquire::https::Proxy "%s";\n' "${PROXY_HOST}" "${PROXY_HOST}" > /etc/apt/apt.conf.d/01proxy
else
echo "Skipping /etc/apt/apt.conf.d/01proxy update; sudo/root not available" >&2
fi
- name: Install dependencies
shell: bash
run: |
set -euo pipefail
requirements="${{ inputs.install-requirements }}"
if [ -n "${requirements}" ]; then
for requirement in ${requirements}; do
if [ -f "${requirement}" ]; then
python -m pip install -r "${requirement}"
else
echo "Requirement file ${requirement} not found" >&2
exit 1
fi
done
fi
- name: Install Playwright browsers
if: ${{ inputs.install-playwright == 'true' }}
shell: bash
run: |
set -euo pipefail
python -m playwright install --with-deps
- name: Wait for database service
if: ${{ inputs.run-db-setup == 'true' }}
shell: bash
run: |
set -euo pipefail
python - <<'PY'
import os
import time
import psycopg2
dsn = (
f"dbname={os.environ['DATABASE_SUPERUSER_DB']} "
f"user={os.environ['DATABASE_SUPERUSER']} "
f"password={os.environ['DATABASE_SUPERUSER_PASSWORD']} "
f"host={os.environ['DATABASE_HOST']} "
f"port={os.environ['DATABASE_PORT']}"
)
for attempt in range(30):
try:
with psycopg2.connect(dsn):
break
except psycopg2.OperationalError:
time.sleep(2)
else:
raise SystemExit("Postgres service did not become available")
PY
- name: Run database setup (dry run)
if: ${{ inputs.run-db-setup == 'true' && inputs.db-dry-run == 'true' }}
shell: bash
run: |
set -euo pipefail
python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data --dry-run -v
- name: Run database setup
if: ${{ inputs.run-db-setup == 'true' }}
shell: bash
run: |
set -euo pipefail
python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data -v

View File

@@ -1,78 +0,0 @@
name: Build and Push Docker Image
on:
workflow_run:
workflows:
- Run Tests
branches:
- main
types:
- completed
jobs:
build-and-push:
if: ${{ github.event.workflow_run.conclusion == 'success' }}
runs-on: ubuntu-latest
env:
DEFAULT_BRANCH: main
REGISTRY_ORG: allucanget
REGISTRY_IMAGE_NAME: calminer
REGISTRY_URL: ${{ secrets.REGISTRY_URL }}
REGISTRY_USERNAME: ${{ secrets.REGISTRY_USERNAME }}
REGISTRY_PASSWORD: ${{ secrets.REGISTRY_PASSWORD }}
WORKFLOW_RUN_HEAD_BRANCH: ${{ github.event.workflow_run.head_branch }}
WORKFLOW_RUN_HEAD_SHA: ${{ github.event.workflow_run.head_sha }}
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Collect workflow metadata
id: meta
shell: bash
run: |
ref_name="${GITHUB_REF_NAME:-${GITHUB_REF##*/}}"
event_name="${GITHUB_EVENT_NAME:-}"
sha="${GITHUB_SHA:-}"
if [ -z "$ref_name" ] && [ -n "${WORKFLOW_RUN_HEAD_BRANCH:-}" ]; then
ref_name="${WORKFLOW_RUN_HEAD_BRANCH}"
fi
if [ -z "$sha" ] && [ -n "${WORKFLOW_RUN_HEAD_SHA:-}" ]; then
sha="${WORKFLOW_RUN_HEAD_SHA}"
fi
if [ "$ref_name" = "${DEFAULT_BRANCH:-main}" ]; then
echo "on_default=true" >> "$GITHUB_OUTPUT"
else
echo "on_default=false" >> "$GITHUB_OUTPUT"
fi
echo "ref_name=$ref_name" >> "$GITHUB_OUTPUT"
echo "event_name=$event_name" >> "$GITHUB_OUTPUT"
echo "sha=$sha" >> "$GITHUB_OUTPUT"
- name: Set up QEMU and Buildx
uses: docker/setup-buildx-action@v3
with:
install: false
- name: Log in to Gitea registry
if: ${{ steps.meta.outputs.on_default == 'true' }}
uses: docker/login-action@v3
continue-on-error: true
with:
registry: ${{ env.REGISTRY_URL }}
username: ${{ env.REGISTRY_USERNAME }}
password: ${{ env.REGISTRY_PASSWORD }}
- name: Build and push Docker image
uses: docker/build-push-action@v4
with:
context: .
file: Dockerfile
push: ${{ steps.meta.outputs.on_default == 'true' && steps.meta.outputs.event_name != 'pull_request' && (env.REGISTRY_URL != '' && env.REGISTRY_USERNAME != '' && env.REGISTRY_PASSWORD != '') }}
tags: |
${{ env.REGISTRY_URL }}/${{ env.REGISTRY_ORG }}/${{ env.REGISTRY_IMAGE_NAME }}:latest
${{ env.REGISTRY_URL }}/${{ env.REGISTRY_ORG }}/${{ env.REGISTRY_IMAGE_NAME }}:${{ steps.meta.outputs.sha }}
cache-from: type=gha
cache-to: type=gha,mode=max

74
.gitea/workflows/ci.yml Normal file
View File

@@ -0,0 +1,74 @@
name: CI
on:
push:
branches: [main, develop]
pull_request:
branches: [main, develop]
jobs:
test:
env:
APT_CACHER_NG: http://192.168.88.14:3142
DB_DRIVER: postgresql+psycopg2
DB_HOST: 192.168.88.35
DB_NAME: calminer_test
DB_USER: calminer
DB_PASSWORD: calminer_password
runs-on: ubuntu-latest
services:
postgres:
image: postgres:17
env:
POSTGRES_USER: ${{ env.DB_USER }}
POSTGRES_PASSWORD: ${{ env.DB_PASSWORD }}
POSTGRES_DB: ${{ env.DB_NAME }}
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
steps:
- uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: '3.11'
- name: Update apt-cacher-ng config
run: |-
echo 'Acquire::http::Proxy "{{ env.APT_CACHER_NG }}";' | tee /etc/apt/apt.conf.d/01apt-cacher-ng
apt-get update
- name: Update system packages
run: apt-get upgrade -y
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
pip install -r requirements-test.txt
- name: Install Playwright system dependencies
run: playwright install-deps
- name: Install Playwright browsers
run: playwright install
- name: Run tests
env:
DATABASE_DRIVER: ${{ env.DB_DRIVER }}
DATABASE_HOST: ${{ env.DB_HOST }}
DATABASE_PORT: 5432
DATABASE_USER: ${{ env.DB_USER }}
DATABASE_PASSWORD: ${{ env.DB_PASSWORD }}
DATABASE_NAME: ${{ env.DB_NAME }}
run: |
pytest tests/ --cov=.
- name: Build Docker image
run: |
docker build -t calminer .

View File

@@ -1,64 +0,0 @@
name: Deploy to Server
on:
workflow_run:
workflows:
- Build and Push Docker Image
branches:
- main
types:
- completed
jobs:
deploy:
if: ${{ github.event.workflow_run.conclusion == 'success' }}
runs-on: ubuntu-latest
env:
DEFAULT_BRANCH: main
REGISTRY_ORG: allucanget
REGISTRY_IMAGE_NAME: calminer
REGISTRY_URL: ${{ secrets.REGISTRY_URL }}
REGISTRY_USERNAME: ${{ secrets.REGISTRY_USERNAME }}
REGISTRY_PASSWORD: ${{ secrets.REGISTRY_PASSWORD }}
WORKFLOW_RUN_HEAD_BRANCH: ${{ github.event.workflow_run.head_branch }}
WORKFLOW_RUN_HEAD_SHA: ${{ github.event.workflow_run.head_sha }}
steps:
- name: SSH and deploy
uses: appleboy/ssh-action@master
with:
host: ${{ secrets.SSH_HOST }}
username: ${{ secrets.SSH_USERNAME }}
key: ${{ secrets.SSH_PRIVATE_KEY }}
script: |
IMAGE_SHA="${{ env.WORKFLOW_RUN_HEAD_SHA }}"
IMAGE_PATH="${{ env.REGISTRY_URL }}/${{ env.REGISTRY_ORG }}/${{ env.REGISTRY_IMAGE_NAME }}"
if [ -z "$IMAGE_SHA" ]; then
echo "Missing workflow run head SHA; aborting deployment." >&2
exit 1
fi
docker pull "$IMAGE_PATH:$IMAGE_SHA"
docker stop calminer || true
docker rm calminer || true
docker run -d --name calminer -p 8000:8000 \
-e DATABASE_DRIVER=${{ secrets.DATABASE_DRIVER }} \
-e DATABASE_HOST=${{ secrets.DATABASE_HOST }} \
-e DATABASE_PORT=${{ secrets.DATABASE_PORT }} \
-e DATABASE_USER=${{ secrets.DATABASE_USER }} \
-e DATABASE_PASSWORD=${{ secrets.DATABASE_PASSWORD }} \
-e DATABASE_NAME=${{ secrets.DATABASE_NAME }} \
-e DATABASE_SCHEMA=${{ secrets.DATABASE_SCHEMA }} \
"$IMAGE_PATH:$IMAGE_SHA"
for attempt in {1..10}; do
if curl -fsS http://localhost:8000/health >/dev/null; then
echo "Deployment health check passed"
exit 0
fi
echo "Health check attempt ${attempt} failed; retrying in 3s"
sleep 3
done
echo "Deployment health check failed after retries" >&2
docker logs calminer >&2 || true
exit 1

View File

@@ -1,79 +0,0 @@
name: Run Tests
on: [push]
jobs:
tests:
name: ${{ matrix.target }} tests
runs-on: ubuntu-latest
container: mcr.microsoft.com/playwright/python:v1.55.0-jammy
env:
DATABASE_DRIVER: postgresql
DATABASE_HOST: postgres
DATABASE_PORT: '5432'
DATABASE_NAME: calminer_ci
DATABASE_USER: calminer
DATABASE_PASSWORD: secret
DATABASE_SCHEMA: public
DATABASE_SUPERUSER: calminer
DATABASE_SUPERUSER_PASSWORD: secret
DATABASE_SUPERUSER_DB: calminer_ci
DATABASE_URL: postgresql+psycopg2://calminer:secret@postgres:5432/calminer_ci
strategy:
fail-fast: false
matrix:
target: [unit, e2e, lint]
services:
postgres:
image: postgres:16
env:
POSTGRES_DB: calminer_ci
POSTGRES_USER: calminer
POSTGRES_PASSWORD: secret
options: >-
--health-cmd "pg_isready -U calminer -d calminer_ci"
--health-interval 10s
--health-timeout 5s
--health-retries 10
steps:
- name: Install Node.js runtime
if: ${{ matrix.target == 'e2e' }}
shell: bash
run: |
set -euo pipefail
export DEBIAN_FRONTEND=noninteractive
curl -fsSL https://deb.nodesource.com/setup_20.x | bash -
apt-get install -y nodejs
- name: Checkout code
uses: actions/checkout@v4
- name: Export PYTHONPATH
shell: bash
run: |
set -euo pipefail
echo "PYTHONPATH=/workspace/allucanget/calminer" >> "$GITHUB_ENV"
# - name: Cache pip dependencies
# uses: actions/cache@v4
# with:
# path: ~/.cache/pip
# key: ${{ runner.os }}-pip-${{ hashFiles('requirements.txt', 'requirements-test.txt') }}
# restore-keys: |
# ${{ runner.os }}-pip-
- name: Prepare Python environment
uses: ./.gitea/actions/setup-python-env
with:
install-playwright: ${{ matrix.target == 'e2e' }}
use-system-python: 'true'
run-db-setup: ${{ matrix.target != 'lint' }}
- name: Run tests
run: |
if [ "${{ matrix.target }}" = "unit" ]; then
pytest tests/unit
elif [ "${{ matrix.target }}" = "lint" ]; then
ruff check .
else
pytest tests/e2e
fi

5
.gitignore vendored
View File

@@ -38,6 +38,9 @@ htmlcov/
 # Mypy cache
 .mypy_cache/
+# Linting cache
+.ruff_cache/
 # Logs
 *.log
 logs/
@@ -46,5 +49,5 @@ logs/
 *.sqlite3
 test*.db
-# Docker files
+# Act runner files
 .runner

View File

@@ -1,35 +1,111 @@
-# Multi-stage Dockerfile to keep final image small
-FROM python:3.10-slim AS builder
-# Install build-time packages and Python dependencies in one layer
-WORKDIR /app
-COPY requirements.txt /app/requirements.txt
-RUN echo 'Acquire::http::Proxy "http://192.168.88.14:3142";' > /etc/apt/apt.conf.d/90proxy
-RUN apt-get update \
-    && apt-get install -y --no-install-recommends build-essential gcc libpq-dev \
-    && python -m pip install --upgrade pip \
-    && pip install --no-cache-dir --prefix=/install -r /app/requirements.txt \
-    && apt-get purge -y --auto-remove build-essential gcc \
-    && rm -rf /var/lib/apt/lists/*
-FROM python:3.10-slim
-WORKDIR /app
-# Copy installed packages from builder
-COPY --from=builder /install /usr/local
-# Assume environment variables for DB config will be set at runtime
-# ENV DATABASE_HOST=your_db_host
-# ENV DATABASE_PORT=your_db_port
-# ENV DATABASE_NAME=your_db_name
-# ENV DATABASE_USER=your_db_user
-# ENV DATABASE_PASSWORD=your_db_password
-# Copy application code
-COPY . /app
-# Expose service port
-EXPOSE 8000
-# Run the FastAPI app with uvicorn
-CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
+# syntax=docker/dockerfile:1.7
+ARG PYTHON_VERSION=3.11-slim
+ARG APT_CACHE_URL=http://192.168.88.14:3142
+FROM python:${PYTHON_VERSION} AS builder
+ARG APT_CACHE_URL
+ENV \
+    PIP_DISABLE_PIP_VERSION_CHECK=1 \
+    PIP_NO_CACHE_DIR=1 \
+    PYTHONDONTWRITEBYTECODE=1 \
+    PYTHONUNBUFFERED=1
+WORKDIR /app
+COPY requirements.txt ./requirements.txt
+RUN --mount=type=cache,target=/root/.cache/pip /bin/bash <<'EOF'
+set -e
+python3 <<'PY'
+import os, socket, urllib.parse
+url = os.environ.get('APT_CACHE_URL', '').strip()
+if url:
+    parsed = urllib.parse.urlparse(url)
+    host = parsed.hostname
+    port = parsed.port or (80 if parsed.scheme == 'http' else 443)
+    if host:
+        sock = socket.socket()
+        sock.settimeout(1)
+        try:
+            sock.connect((host, port))
+        except OSError:
+            pass
+        else:
+            with open('/etc/apt/apt.conf.d/01proxy', 'w', encoding='utf-8') as fh:
+                fh.write(f"Acquire::http::Proxy \"{url}\";\n")
+                fh.write(f"Acquire::https::Proxy \"{url}\";\n")
+        finally:
+            sock.close()
+PY
+apt-get update
+apt-get install -y --no-install-recommends build-essential gcc libpq-dev
+pip install --upgrade pip
+pip wheel --no-deps --wheel-dir /wheels -r requirements.txt
+apt-get purge -y --auto-remove build-essential gcc
+rm -rf /var/lib/apt/lists/*
+EOF
+FROM python:${PYTHON_VERSION} AS runtime
+ARG APT_CACHE_URL
+ENV \
+    PIP_DISABLE_PIP_VERSION_CHECK=1 \
+    PIP_NO_CACHE_DIR=1 \
+    PYTHONDONTWRITEBYTECODE=1 \
+    PYTHONUNBUFFERED=1 \
+    PATH="/home/appuser/.local/bin:${PATH}"
+WORKDIR /app
+RUN groupadd --system app && useradd --system --create-home --gid app appuser
+RUN /bin/bash <<'EOF'
+set -e
+python3 <<'PY'
+import os, socket, urllib.parse
+url = os.environ.get('APT_CACHE_URL', '').strip()
+if url:
+    parsed = urllib.parse.urlparse(url)
+    host = parsed.hostname
+    port = parsed.port or (80 if parsed.scheme == 'http' else 443)
+    if host:
+        sock = socket.socket()
+        sock.settimeout(1)
+        try:
+            sock.connect((host, port))
+        except OSError:
+            pass
+        else:
+            with open('/etc/apt/apt.conf.d/01proxy', 'w', encoding='utf-8') as fh:
+                fh.write(f"Acquire::http::Proxy \"{url}\";\n")
+                fh.write(f"Acquire::https::Proxy \"{url}\";\n")
+        finally:
+            sock.close()
+PY
+apt-get update
+apt-get install -y --no-install-recommends libpq5
+rm -rf /var/lib/apt/lists/*
+EOF
+COPY --from=builder /wheels /wheels
+COPY --from=builder /app/requirements.txt /tmp/requirements.txt
+RUN pip install --upgrade pip \
+    && pip install --no-cache-dir --find-links=/wheels -r /tmp/requirements.txt \
+    && rm -rf /wheels /tmp/requirements.txt
+COPY . /app
+RUN chown -R appuser:app /app
+USER appuser
+EXPOSE 8003
+CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8003", "--workers", "4"]

101
README.md
View File

@@ -6,24 +6,32 @@ Focuses on ore mining operations and covering parameters such as capital and ope
 The system is designed to help mining companies make informed decisions by simulating various scenarios and analyzing potential outcomes based on stochastic variables.
-A range of features are implemented to support these functionalities.
-## Features
-- **Scenario Management**: Manage multiple mining scenarios with independent parameter sets and outputs.
-- **Process Parameters**: Define and persist process inputs via FastAPI endpoints and template-driven forms.
-- **Cost Tracking**: Capture capital (`capex`) and operational (`opex`) expenditures per scenario.
-- **Consumption Tracking**: Record resource consumption (chemicals, fuel, water, scrap) tied to scenarios.
-- **Production Output**: Store production metrics such as tonnage, recovery, and revenue drivers.
-- **Equipment Management**: Register scenario-specific equipment inventories.
-- **Maintenance Logging**: Log maintenance events against equipment with dates and costs.
-- **Reporting Dashboard**: Surface aggregated statistics for simulation outputs with an interactive Chart.js dashboard.
+## Current Features
+> [!TIP]
+> TODO: Update this section to reflect the current feature set.
+| Feature | Category | Description | Status |
+| ---------------------- | ----------- | ------------------------------------------------------------------------------------ | ----------- |
+| Scenario Management | Core | Manage multiple mining scenarios with independent parameter sets and outputs. | Done |
+| Parameter Definition | Core | Define and manage various parameters for each scenario. | Done |
+| Cost Tracking | Financial | Capture and analyze capital and operational expenditures. | Done |
+| Consumption Tracking | Operational | Record resource consumption tied to scenarios. | Done |
+| Production Output | Operational | Store and analyze production metrics such as tonnage, recovery, and revenue drivers. | Done |
+| Equipment Management | Operational | Manage equipment inventories and specifications for each scenario. | Done |
+| Maintenance Logging | Operational | Log maintenance events and costs associated with equipment. | Started |
+| Reporting Dashboard | Analytics | View aggregated statistics and visualizations for scenario outputs. | In Progress |
+| Monte Carlo Simulation | Analytics | Run stochastic simulations to assess risk and variability in outcomes. | Started |
+| Application Settings | Core | Manage global application settings such as themes and currency options. | Done |
+## Key UI/UX Features
 - **Unified UI Shell**: Server-rendered templates extend a shared base layout with a persistent left sidebar linking scenarios, parameters, costs, consumption, production, equipment, maintenance, simulations, and reporting views.
+- **Operations Overview Dashboard**: The root route (`/`) surfaces cross-scenario KPIs, charts, and maintenance reminders with a one-click refresh backed by aggregated loaders.
+- **Theming Tokens**: Shared CSS variables in `static/css/main.css` centralize the UI color palette for consistent styling and rapid theme tweaks.
+- **Settings Center**: The Settings landing page exposes visual theme controls and links to currency administration, backed by persisted application settings and environment overrides.
 - **Modular Frontend Scripts**: Page-specific interactions in `static/js/` modules, keeping templates lean while enabling browser caching and reuse.
-- **Monte Carlo Simulation (in progress)**: Services and routes are scaffolded for future stochastic analysis.
+## Planned Features
+See [Roadmap](docs/roadmap.md) for details on planned features and enhancements.
 ## Documentation & quickstart
@@ -45,59 +53,52 @@ The repository ships with a multi-stage `Dockerfile` that produces a slim runtim
 ### Build container
-```powershell
-# Build the image locally
-docker build -t calminer:latest .
+```bash
+docker build -t calminer .
 ```
 ### Push to registry
-```powershell
-# Tag and push the image to your registry
-docker login your-registry.com -u your-username -p your-password
-docker tag calminer:latest your-registry.com/your-namespace/calminer:latest
-docker push your-registry.com/your-namespace/calminer:latest
+To push the image to a registry, tag it appropriately and push:
+```bash
+docker tag calminer your-registry/calminer:latest
+docker push your-registry/calminer:latest
 ```
 ### Run container
-Expose FastAPI on <http://localhost:8000> with database configuration via granular environment variables:
-```powershell
-# Provide database configuration via granular environment variables
-docker run --rm -p 8000:8000 ^
-  -e DATABASE_DRIVER="postgresql" ^
-  -e DATABASE_HOST="db.host" ^
-  -e DATABASE_PORT="5432" ^
-  -e DATABASE_USER="calminer" ^
-  -e DATABASE_PASSWORD="s3cret" ^
-  -e DATABASE_NAME="calminer" ^
-  -e DATABASE_SCHEMA="public" ^
-  calminer:latest
+To run the container, ensure PostgreSQL is available and set environment variables:
+```bash
+docker run -p 8000:8000 \
+  -e DATABASE_HOST=your-postgres-host \
+  -e DATABASE_PORT=5432 \
+  -e DATABASE_USER=calminer \
+  -e DATABASE_PASSWORD=your-password \
+  -e DATABASE_NAME=calminer_db \
+  calminer
 ```
-### Orchestrated Deployment
-Use `docker compose` or an orchestrator of your choice to co-locate PostgreSQL/Redis/Traefik alongside the app when needed. The image expects migrations to be applied before startup.
-### Production docker-compose workflow
-`docker-compose.prod.yml` covers the API plus optional Traefik (`reverse-proxy` profile) and on-host Postgres (`local-db` profile). Commands, health checks, and environment variables are documented in [docs/quickstart.md](docs/quickstart.md#compose-driven-production-stack) and expanded in [docs/architecture/07_deployment_view.md](docs/architecture/07_deployment_view.md).
-### Development docker-compose workflow
-`docker-compose.dev.yml` runs FastAPI (with reload) and Postgres in a single stack. See [docs/quickstart.md](docs/quickstart.md#compose-driven-development-stack) for lifecycle commands and troubleshooting, plus the architecture chapter ([docs/architecture/15_development_setup.md](docs/architecture/15_development_setup.md)) for deeper context.
-### Test docker-compose workflow
-`docker-compose.test.yml` mirrors the CI pipeline: it provisions Postgres, runs the database bootstrap script, and executes pytest. Usage examples live in [docs/quickstart.md](docs/quickstart.md#compose-driven-test-stack).
-## CI/CD expectations
+## Development with Docker Compose
+For local development, use `docker-compose.yml` which includes the app and PostgreSQL services.
+```bash
+# Start services
+docker-compose up
+# Or run in background
+docker-compose up -d
+# Stop services
+docker-compose down
+```
+The app will be available at `http://localhost:8000`, PostgreSQL at `localhost:5432`.
+## CI/CD
 CalMiner uses Gitea Actions workflows stored in `.gitea/workflows/`:
-- `test.yml` runs style/unit/e2e suites on every push with cached Python dependencies.
-- `build-and-push.yml` builds the Docker image, reuses cached layers, and pushes to the configured registry.
-- `deploy.yml` pulls the pushed image on the target host and restarts the container.
-Pipelines assume the following secrets are provisioned in the Gitea instance: `REGISTRY_USERNAME`, `REGISTRY_PASSWORD`, `REGISTRY_URL`, `SSH_HOST`, `SSH_USERNAME`, and `SSH_PRIVATE_KEY`.
+- `ci.yml`: Runs on push and PR to main/develop branches. Sets up Python, installs dependencies, runs tests with coverage, and builds the Docker image.

View File

@@ -1,50 +0,0 @@
services:
api:
build:
context: .
dockerfile: Dockerfile
command: uvicorn main:app --host 0.0.0.0 --port 8000 --reload
ports:
- "8000:8000"
environment:
- DATABASE_HOST=db
- DATABASE_PORT=5432
- DATABASE_USER=calminer
- DATABASE_PASSWORD=calminer
- DATABASE_NAME=calminer_dev
volumes:
- .:/app
depends_on:
db:
condition: service_healthy
networks:
- calminer_backend
db:
image: postgres:16
restart: unless-stopped
environment:
- POSTGRES_DB=calminer_dev
- POSTGRES_USER=calminer
- POSTGRES_PASSWORD=calminer
- LANG=en_US.UTF-8
- LC_ALL=en_US.UTF-8
- POSTGRES_INITDB_ARGS=--encoding=UTF8 --locale=en_US.UTF-8
ports:
- "5432:5432"
volumes:
- pg_data_dev:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U calminer -d calminer_dev"]
interval: 10s
timeout: 5s
retries: 5
networks:
- calminer_backend
networks:
calminer_backend:
driver: bridge
volumes:
pg_data_dev:

View File

@@ -1,23 +0,0 @@
version: "3.9"
services:
postgres:
image: postgres:16-alpine
container_name: calminer_postgres_local
restart: unless-stopped
environment:
POSTGRES_DB: calminer_local
POSTGRES_USER: calminer
POSTGRES_PASSWORD: secret
ports:
- "5433:5432"
healthcheck:
test: ["CMD-SHELL", "pg_isready -U calminer -d calminer_local"]
interval: 10s
timeout: 5s
retries: 10
volumes:
- postgres_data:/var/lib/postgresql/data
volumes:
postgres_data:

View File

@@ -1,130 +0,0 @@
services:
api:
image: ${CALMINER_IMAGE:-calminer-api:latest}
build:
context: .
dockerfile: Dockerfile
restart: unless-stopped
env_file:
- config/setup_production.env
environment:
UVICORN_WORKERS: ${UVICORN_WORKERS:-2}
UVICORN_LOG_LEVEL: ${UVICORN_LOG_LEVEL:-info}
command:
[
"sh",
"-c",
"uvicorn main:app --host 0.0.0.0 --port 8000 --workers ${UVICORN_WORKERS:-2} --log-level ${UVICORN_LOG_LEVEL:-info}",
]
ports:
- "${CALMINER_API_PORT:-8000}:8000"
deploy:
resources:
limits:
cpus: ${API_LIMIT_CPUS:-1.0}
memory: ${API_LIMIT_MEMORY:-1g}
reservations:
memory: ${API_RESERVATION_MEMORY:-512m}
healthcheck:
test:
- "CMD-SHELL"
- 'python -c "import urllib.request; urllib.request.urlopen(''http://127.0.0.1:8000/health'').read()"'
interval: 30s
timeout: 10s
retries: 5
start_period: 30s
networks:
- calminer_backend
logging:
driver: json-file
options:
max-size: "10m"
max-file: "3"
labels:
- "traefik.enable=true"
- "traefik.http.routers.calminer.rule=Host(`${CALMINER_DOMAIN}`)"
- "traefik.http.routers.calminer.entrypoints=websecure"
- "traefik.http.routers.calminer.tls.certresolver=letsencrypt"
- "traefik.http.services.calminer.loadbalancer.server.port=8000"
traefik:
image: traefik:v3.1
restart: unless-stopped
command:
- "--providers.docker=true"
- "--providers.docker.exposedbydefault=false"
- "--entrypoints.web.address=:80"
- "--entrypoints.websecure.address=:443"
- "--certificatesresolvers.letsencrypt.acme.tlschallenge=true"
- "--certificatesresolvers.letsencrypt.acme.email=${TRAEFIK_ACME_EMAIL:?TRAEFIK_ACME_EMAIL not set}"
- "--certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json"
ports:
- "80:80"
- "443:443"
deploy:
resources:
limits:
cpus: ${TRAEFIK_LIMIT_CPUS:-0.5}
memory: ${TRAEFIK_LIMIT_MEMORY:-512m}
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- traefik_letsencrypt:/letsencrypt
networks:
- calminer_backend
profiles:
- reverse-proxy
healthcheck:
test:
- "CMD"
- "traefik"
- "healthcheck"
- "--entrypoints=web"
- "--entrypoints=websecure"
interval: 30s
timeout: 10s
retries: 5
postgres:
image: postgres:16
profiles:
- local-db
restart: unless-stopped
environment:
POSTGRES_DB: ${POSTGRES_DB:-calminer}
POSTGRES_USER: ${POSTGRES_USER:-calminer}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-changeme}
LANG: en_US.UTF-8
LC_ALL: en_US.UTF-8
POSTGRES_INITDB_ARGS: --encoding=UTF8 --locale=en_US.UTF-8
ports:
- "${CALMINER_DB_PORT:-5432}:5432"
deploy:
resources:
limits:
cpus: ${POSTGRES_LIMIT_CPUS:-1.0}
memory: ${POSTGRES_LIMIT_MEMORY:-2g}
reservations:
memory: ${POSTGRES_RESERVATION_MEMORY:-1g}
volumes:
- pg_data_prod:/var/lib/postgresql/data
- ./backups:/backups
healthcheck:
test:
[
"CMD-SHELL",
"pg_isready -U ${POSTGRES_USER:-calminer} -d ${POSTGRES_DB:-calminer}",
]
interval: 30s
timeout: 10s
retries: 5
networks:
- calminer_backend
networks:
calminer_backend:
name: ${CALMINER_NETWORK:-calminer_backend}
driver: bridge
volumes:
pg_data_prod:
traefik_letsencrypt:

View File

@@ -1,82 +0,0 @@
services:
tests:
build:
context: .
dockerfile: Dockerfile
command: >
sh -c "set -eu; pip install -r requirements-test.txt; python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data --dry-run -v; python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data -v; pytest $${PYTEST_TARGET:-tests/unit}"
environment:
DATABASE_DRIVER: postgresql
DATABASE_HOST: postgres
DATABASE_PORT: 5432
DATABASE_NAME: calminer_test
DATABASE_USER: calminer_test
DATABASE_PASSWORD: calminer_test_password
DATABASE_SCHEMA: public
DATABASE_SUPERUSER: postgres
DATABASE_SUPERUSER_PASSWORD: postgres
DATABASE_SUPERUSER_DB: postgres
DATABASE_URL: postgresql+psycopg2://calminer_test:calminer_test_password@postgres:5432/calminer_test
PYTEST_TARGET: tests/unit
PYTHONPATH: /app
depends_on:
postgres:
condition: service_healthy
volumes:
- .:/app
- pip_cache_test:/root/.cache/pip
networks:
- calminer_test
api:
build:
context: .
dockerfile: Dockerfile
command: uvicorn main:app --host 0.0.0.0 --port 8000 --reload
environment:
DATABASE_DRIVER: postgresql
DATABASE_HOST: postgres
DATABASE_PORT: 5432
DATABASE_NAME: calminer_test
DATABASE_USER: calminer_test
DATABASE_PASSWORD: calminer_test_password
DATABASE_SCHEMA: public
DATABASE_URL: postgresql+psycopg2://calminer_test:calminer_test_password@postgres:5432/calminer_test
PYTHONPATH: /app
depends_on:
postgres:
condition: service_healthy
ports:
- "8001:8000"
networks:
- calminer_test
postgres:
image: postgres:16
restart: unless-stopped
environment:
POSTGRES_DB: calminer_test
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
LANG: en_US.UTF-8
LC_ALL: en_US.UTF-8
POSTGRES_INITDB_ARGS: --encoding=UTF8 --locale=en_US.UTF-8
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres -d calminer_test"]
interval: 10s
timeout: 5s
retries: 5
ports:
- "5433:5432"
volumes:
- pg_data_test:/var/lib/postgresql/data
networks:
- calminer_test
networks:
calminer_test:
driver: bridge
volumes:
pg_data_test:
pip_cache_test:

View File

@@ -1,39 +1,36 @@
-version: "3.8"
 services:
-  api:
-    image: ${CALMINER_IMAGE:-calminer-api:latest}
+  app:
     build:
       context: .
       dockerfile: Dockerfile
-    restart: unless-stopped
-    env_file:
-      - config/setup_production.env
-    environment:
-      UVICORN_WORKERS: ${UVICORN_WORKERS:-2}
-      UVICORN_LOG_LEVEL: ${UVICORN_LOG_LEVEL:-info}
-    command:
-      [
-        "sh",
-        "-c",
-        "uvicorn main:app --host 0.0.0.0 --port 8000 --workers ${UVICORN_WORKERS:-2} --log-level ${UVICORN_LOG_LEVEL:-info}",
-      ]
     ports:
-      - "${CALMINER_API_PORT:-8000}:8000"
-    healthcheck:
-      test:
-        - "CMD-SHELL"
-        - 'python -c "import urllib.request; urllib.request.urlopen(''http://127.0.0.1:8000/docs'').read()"'
-      interval: 30s
-      timeout: 10s
-      retries: 5
-      start_period: 30s
-    networks:
-      - calminer_backend
-    logging:
-      driver: json-file
-      options:
-        max-size: "10m"
-        max-file: "3"
-networks:
-  calminer_backend:
-    driver: bridge
+      - "8003:8003"
+    environment:
+      - DATABASE_HOST=postgres
+      - DATABASE_PORT=5432
+      - DATABASE_USER=calminer
+      - DATABASE_PASSWORD=calminer_password
+      - DATABASE_NAME=calminer_db
+      - DATABASE_DRIVER=postgresql
+    depends_on:
+      - postgres
+    volumes:
+      - ./logs:/app/logs
+    restart: unless-stopped
+  postgres:
+    image: postgres:17
+    environment:
+      - POSTGRES_USER=calminer
+      - POSTGRES_PASSWORD=calminer_password
+      - POSTGRES_DB=calminer_db
+    ports:
+      - "5432:5432"
+    volumes:
+      - postgres_data:/var/lib/postgresql/data
+    restart: unless-stopped
+volumes:
+  postgres_data:

View File

@@ -1,110 +0,0 @@
# Implementation Plan 2025-10-20
This file contains the implementation plan (MVP features, steps, and estimates).
## Project Setup
1. Connect to PostgreSQL database with schema `calminer`.
1. Create and activate a virtual environment and install dependencies via `requirements.txt`.
1. Define database environment variables in `.env` (e.g., `DATABASE_DRIVER`, `DATABASE_HOST`, `DATABASE_PORT`, `DATABASE_USER`, `DATABASE_PASSWORD`, `DATABASE_NAME`, optional `DATABASE_SCHEMA`).
1. Configure FastAPI entrypoint in `main.py` to include routers.
## Feature: Scenario Management
### Scenario Management — Steps
1. Create `models/scenario.py` for scenario CRUD.
1. Implement API endpoints in `routes/scenarios.py` (GET, POST, PUT, DELETE).
1. Write unit tests in `tests/unit/test_scenario.py`.
1. Build UI component `components/ScenarioForm.html`.
## Feature: Process Parameters
### Parameters — Steps
1. Create `models/parameters.py` for process parameters.
1. Implement Pydantic schemas in `routes/parameters.py`.
1. Add validation middleware in `middleware/validation.py`.
1. Write unit tests in `tests/unit/test_parameter.py`.
1. Build UI component `components/ParameterInput.html`.
## Feature: Stochastic Variables
### Stochastic Variables — Steps
1. Create `models/distribution.py` for variable distributions.
1. Implement API routes in `routes/distributions.py`.
1. Write Pydantic schemas and validations.
1. Write unit tests in `tests/unit/test_distribution.py`.
1. Build UI component `components/DistributionEditor.html`.
## Feature: Cost Tracking
### Cost Tracking — Steps
1. Create `models/capex.py` and `models/opex.py`.
1. Implement API routes in `routes/costs.py`.
1. Write Pydantic schemas for CAPEX/OPEX.
1. Write unit tests in `tests/unit/test_costs.py`.
1. Build UI component `components/CostForm.html`.
## Feature: Consumption Tracking
### Consumption Tracking — Steps
1. Create models for consumption: `chemical_consumption.py`, `fuel_consumption.py`, `water_consumption.py`, `scrap_consumption.py`.
1. Implement API routes in `routes/consumption.py`.
1. Write Pydantic schemas for consumption data.
1. Write unit tests in `tests/unit/test_consumption.py`.
1. Build UI component `components/ConsumptionDashboard.html`.
## Feature: Production Output
### Production Output — Steps
1. Create `models/production_output.py`.
1. Implement API routes in `routes/production.py`.
1. Write Pydantic schemas for production output.
1. Write unit tests in `tests/unit/test_production.py`.
1. Build UI component `components/ProductionChart.html`.
## Feature: Equipment Management
### Equipment Management — Steps
1. Create `models/equipment.py` for equipment data.
1. Implement API routes in `routes/equipment.py`.
1. Write Pydantic schemas for equipment.
1. Write unit tests in `tests/unit/test_equipment.py`.
1. Build UI component `components/EquipmentList.html`.
## Feature: Maintenance Logging
### Maintenance Logging — Steps
1. Create `models/maintenance.py` for maintenance events.
1. Implement API routes in `routes/maintenance.py`.
1. Write Pydantic schemas for maintenance logs.
1. Write unit tests in `tests/unit/test_maintenance.py`.
1. Build UI component `components/MaintenanceLog.html`.
## Feature: Monte Carlo Simulation Engine
### Monte Carlo Engine — Steps
1. Implement Monte Carlo logic in `services/simulation.py`.
1. Persist results in `models/simulation_result.py`.
1. Expose endpoint in `routes/simulations.py`.
1. Write integration tests in `tests/unit/test_simulation.py`.
1. Build UI component `components/SimulationRunner.html`.
## Feature: Reporting / Dashboard
### Reporting / Dashboard — Steps
1. Implement report calculations in `services/reporting.py`.
1. Add detailed and summary endpoints in `routes/reporting.py`.
1. Write unit tests in `tests/unit/test_reporting.py`.
1. Enhance UI in `components/Dashboard.html` with charts.
See [UI and Style](../13_ui_and_style.md) for the UI template audit, layout guidance, and next steps.

View File

@@ -21,10 +21,7 @@ CalMiner uses a combination of unit, integration, and end-to-end tests to ensure
 ### CI/CD
 - Use Gitea Actions for CI/CD; workflows live under `.gitea/workflows/`.
-- `test.yml` runs on every push, provisions a temporary Postgres 16 service, waits for readiness, executes the setup script in dry-run and live modes, then fans out into parallel matrix jobs for unit (`pytest tests/unit`) and end-to-end (`pytest tests/e2e`) suites. Playwright browsers install only for the E2E job.
-- `build-and-push.yml` runs only after the **Run Tests** workflow finishes successfully (triggered via `workflow_run` on `main`). Once tests pass, it builds the Docker image with `docker/build-push-action@v2`, reuses cache-backed layers, and pushes to the Gitea registry.
-- `deploy.yml` runs only after the build workflow reports success on `main`. It connects to the target host (via `appleboy/ssh-action`), pulls the Docker image tagged with the build commit SHA, and restarts the container with that exact image reference.
-- Mandatory secrets: `REGISTRY_USERNAME`, `REGISTRY_PASSWORD`, `REGISTRY_URL`, `SSH_HOST`, `SSH_USERNAME`, `SSH_PRIVATE_KEY`.
+- `ci.yml` runs on push and pull requests to `main` and `develop` branches. It provisions a temporary PostgreSQL 15 service, sets up Python 3.11, installs dependencies from `requirements.txt` and `requirements-test.txt`, runs pytest with coverage on all tests, and builds the Docker image.
 - Run tests on pull requests to shared branches; enforce coverage target ≥80% (pytest-cov).
 ### Running Tests
@@ -74,7 +71,7 @@ To run the Playwright tests:
 ```bash
 pytest tests/e2e/
-````
+```
 To run headed mode:
@@ -166,7 +163,7 @@ When adding new workflows, mirror this structure to ensure secrets, caching, and
 - Usage sketch (in `test.yml`):
 ```yaml
 - name: Prepare Python environment
   uses: ./.gitea/actions/setup-python-env
   with:
     install-playwright: ${{ matrix.target == 'e2e' }}

View File

@@ -0,0 +1,82 @@
# Database Deployment
## Migrations & Baseline
A consolidated baseline migration (`scripts/migrations/000_base.sql`) captures all schema changes required for a fresh installation. The script is idempotent: it creates the `currency` and `measurement_unit` reference tables, provisions the `application_setting` store for configurable UI/system options, ensures consumption and production records expose unit metadata, and enforces the foreign keys used by CAPEX and OPEX.
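A minimal sketch of the guard such a runner can use is shown below. The `schema_migrations` table name comes from this document; the column names and the helper itself are illustrative assumptions, not the script's actual API:

```python
def apply_baseline_once(conn, path="scripts/migrations/000_base.sql"):
    """Apply the baseline migration only if schema_migrations has no record of it."""
    with conn.cursor() as cur:
        # Assumed columns: filename (primary key) and an applied_at timestamp.
        cur.execute(
            "CREATE TABLE IF NOT EXISTS schema_migrations ("
            "filename text PRIMARY KEY, applied_at timestamptz DEFAULT now())"
        )
        cur.execute("SELECT 1 FROM schema_migrations WHERE filename = %s", (path,))
        if cur.fetchone():
            return False  # already applied; re-running is a no-op
        with open(path, encoding="utf-8") as fh:
            cur.execute(fh.read())  # the baseline script is itself written to be idempotent
        cur.execute("INSERT INTO schema_migrations (filename) VALUES (%s)", (path,))
    conn.commit()
    return True
```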
Configure granular database settings in your PowerShell session before running migrations:
```powershell
$env:DATABASE_DRIVER = 'postgresql'
$env:DATABASE_HOST = 'localhost'
$env:DATABASE_PORT = '5432'
$env:DATABASE_USER = 'calminer'
$env:DATABASE_PASSWORD = 's3cret'
$env:DATABASE_NAME = 'calminer'
$env:DATABASE_SCHEMA = 'public'
python scripts/setup_database.py --run-migrations --seed-data --dry-run
python scripts/setup_database.py --run-migrations --seed-data
```
The dry-run invocation reports which steps would execute without making changes. The live run applies the baseline (if not already recorded in `schema_migrations`) and seeds the reference data relied upon by the UI and API.
> When `--seed-data` is supplied without `--run-migrations`, the bootstrap script automatically applies any pending SQL migrations first so the `application_setting` table (and future settings-backed features) are present before seeding.
>
> The application still accepts `DATABASE_URL` as a fallback if the granular variables are not set.
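A rough sketch of that precedence (granular variables first, `DATABASE_URL` as fallback); the helper name and exact URL assembly are assumptions rather than the application's actual configuration code:

```python
import os

def database_url() -> str:
    """Prefer granular DATABASE_* settings; fall back to a legacy DATABASE_URL."""
    parts = {key: os.getenv(f"DATABASE_{key}")
             for key in ("DRIVER", "USER", "PASSWORD", "HOST", "PORT", "NAME")}
    if all(parts.values()):
        return "{DRIVER}://{USER}:{PASSWORD}@{HOST}:{PORT}/{NAME}".format(**parts)
    legacy = os.getenv("DATABASE_URL")
    if legacy:
        return legacy
    raise RuntimeError("No database configuration found in the environment")
```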
## Database bootstrap workflow
Provision or refresh a database instance with `scripts/setup_database.py`. Populate the required environment variables (an example lives at `config/setup_test.env.example`) and run:
```powershell
# Load test credentials (PowerShell)
Get-Content .\config\setup_test.env.example |
ForEach-Object {
if ($_ -and -not $_.StartsWith('#')) {
$name, $value = $_ -split '=', 2
Set-Item -Path Env:$name -Value $value
}
}
# Dry-run to inspect the planned actions
python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data --dry-run -v
# Execute the full workflow
python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data -v
```
Typical log output confirms:
- Admin and application connections succeed for the supplied credentials.
- Database and role creation are idempotent (`already present` when rerun).
- SQLAlchemy metadata either reports missing tables or `All tables already exist`.
- Migrations list pending files and finish with `Applied N migrations` (a new database reports `Applied 1 migrations` for `000_base.sql`).
After a successful run the target database contains all application tables plus `schema_migrations`, and that table records each applied migration file. New installations only record `000_base.sql`; upgraded environments retain historical entries alongside the baseline.
### Seeding reference data
`scripts/seed_data.py` provides targeted control over the baseline datasets when the full setup script is not required:
```powershell
python scripts/seed_data.py --currencies --units --dry-run
python scripts/seed_data.py --currencies --units
```
The seeder upserts the canonical currency catalog (`USD`, `EUR`, `CLP`, `RMB`, `GBP`, `CAD`, `AUD`) using ASCII-safe symbols (`USD$`, `EUR`, etc.) and the measurement units referenced by the UI (`tonnes`, `kilograms`, `pounds`, `liters`, `cubic_meters`, `kilowatt_hours`). The setup script invokes the same seeder when `--seed-data` is provided and verifies the expected rows afterward, warning if any are missing or inactive.
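A hedged sketch of what such an upsert can look like with psycopg2; the `currency` table and its `code`/`symbol`/`is_active` columns are assumptions based on the description above, not the seeder's verified schema:

```python
CANONICAL_CURRENCIES = [("USD", "USD$"), ("EUR", "EUR"), ("CLP", "CLP"),
                        ("RMB", "RMB"), ("GBP", "GBP"), ("CAD", "CAD"), ("AUD", "AUD")]

def upsert_currencies(conn):
    """Insert the canonical currencies; update the symbol and re-activate them on re-runs."""
    with conn.cursor() as cur:
        for code, symbol in CANONICAL_CURRENCIES:
            cur.execute(
                """
                INSERT INTO currency (code, symbol, is_active)
                VALUES (%s, %s, TRUE)
                ON CONFLICT (code)
                DO UPDATE SET symbol = EXCLUDED.symbol, is_active = TRUE
                """,
                (code, symbol),
            )
    conn.commit()
```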
### Rollback guidance
`scripts/setup_database.py` now tracks compensating actions when it creates the database or application role. If a later step fails, the script replays those rollback actions (dropping the newly created database or role and revoking grants) before exiting. Dry runs never register rollback steps and remain read-only.
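The compensating-action pattern described here can be pictured roughly as follows; this is an illustrative sketch, not the script's actual implementation:

```python
def run_with_rollback(steps):
    """steps: list of (name, action, compensate) callables.
    Execute actions in order; on failure, replay recorded compensations in reverse."""
    undo = []
    try:
        for name, action, compensate in steps:
            action()
            undo.append((name, compensate))
    except Exception:
        for name, compensate in reversed(undo):
            try:
                compensate()  # e.g. drop the database or role created earlier
            except Exception as exc:
                print(f"Rollback step {name!r} could not complete: {exc}")
        raise
```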
If the script reports that some rollback steps could not complete—for example because a connection cannot be established—rerun the script with `--dry-run` to confirm the desired end state and then apply the outstanding cleanup manually:
```powershell
python scripts/setup_database.py --ensure-database --ensure-role --dry-run -v
# Manual cleanup examples when automation cannot connect
psql -d postgres -c "DROP DATABASE IF EXISTS calminer"
psql -d postgres -c "DROP ROLE IF EXISTS calminer"
```
After a failure and rollback, rerun the full setup once the environment issues are resolved.

View File

@@ -1,6 +1,6 @@
 ---
-title: "CalMiner Architecture Documentation"
-description: "arc42-based architecture documentation for the CalMiner project"
+title: 'CalMiner Architecture Documentation'
+description: 'arc42-based architecture documentation for the CalMiner project'
 ---
 # Architecture documentation (arc42 mapping)
@@ -11,16 +11,32 @@ This folder mirrors the arc42 chapter structure (adapted to Markdown).
 - [01 Introduction and Goals](01_introduction_and_goals.md)
 - [02 Architecture Constraints](02_architecture_constraints.md)
+- [02_01 Technical Constraints](02_constraints/02_01_technical_constraints.md)
+- [02_02 Organizational Constraints](02_constraints/02_02_organizational_constraints.md)
+- [02_03 Regulatory Constraints](02_constraints/02_03_regulatory_constraints.md)
+- [02_04 Environmental Constraints](02_constraints/02_04_environmental_constraints.md)
+- [02_05 Performance Constraints](02_constraints/02_05_performance_constraints.md)
 - [03 Context and Scope](03_context_and_scope.md)
+- [03_01 Architecture Scope](03_scope/03_01_architecture_scope.md)
 - [04 Solution Strategy](04_solution_strategy.md)
+- [04_01 Client-Server Architecture](04_strategy/04_01_client_server_architecture.md)
+- [04_02 Technology Choices](04_strategy/04_02_technology_choices.md)
+- [04_03 Trade-offs](04_strategy/04_03_trade_offs.md)
+- [04_04 Future Considerations](04_strategy/04_04_future_considerations.md)
 - [05 Building Block View](05_building_block_view.md)
+- [05_01 Architecture Overview](05_blocks/05_01_architecture_overview.md)
+- [05_02 Backend Components](05_blocks/05_02_backend_components.md)
+- [05_03 Frontend Components](05_blocks/05_03_frontend_components.md)
+- [05_03 Theming](05_blocks/05_03_theming.md)
+- [05_04 Middleware & Utilities](05_blocks/05_04_middleware_utilities.md)
 - [06 Runtime View](06_runtime_view.md)
 - [07 Deployment View](07_deployment_view.md)
-- [Testing & CI](07_deployment/07_01_testing_ci.md.md)
+- [07_01 Testing & CI](07_deployment/07_01_testing_ci.md.md)
+- [07_02 Database](07_deployment/07_02_database.md)
 - [08 Concepts](08_concepts.md)
+- [08_01 Security](08_concepts/08_01_security.md)
+- [08_02 Data Models](08_concepts/08_02_data_models.md)
 - [09 Architecture Decisions](09_architecture_decisions.md)
 - [10 Quality Requirements](10_quality_requirements.md)
 - [11 Technical Risks](11_technical_risks.md)
 - [12 Glossary](12_glossary.md)
+- [13 UI and Style](13_ui_and_style.md)
+- [15 Development Setup](15_development_setup.md)

View File

@@ -1,50 +1,77 @@
# Development Environment Setup
This document outlines the local development environment and steps to get the project running.
## Prerequisites
- Python (version 3.11+)
- PostgreSQL (version 13+)
- Git
- Docker and Docker Compose (optional, for containerized development)
## Clone and Project Setup
```powershell
# Clone the repository
git clone https://git.allucanget.biz/allucanget/calminer.git
cd calminer
```
## Development with Docker Compose (Recommended)
For a quick setup without installing PostgreSQL locally, use Docker Compose:
```powershell
# Start services
docker-compose up
# The app will be available at http://localhost:8000
# Database is automatically set up
```
To run in background:
```powershell
docker-compose up -d
```
To stop:
```powershell
docker-compose down
```
## Manual Development Setup
### Virtual Environment
```powershell
# Create and activate a virtual environment
python -m venv .venv
.\.venv\Scripts\Activate.ps1
```
### Install Dependencies
```powershell
pip install -r requirements.txt
```
### Database Setup
1. Create database user:
```sql
CREATE USER calminer_user WITH PASSWORD 'your_password';
```
1. Create database:
```sql
CREATE DATABASE calminer;
```
### Environment Variables
1. Copy `.env.example` to `.env` at project root.
1. Edit `.env` to set database connection details:
@@ -57,21 +84,21 @@ DATABASE_USER=calminer_user
DATABASE_PASSWORD=your_password
DATABASE_NAME=calminer
DATABASE_SCHEMA=public
```
1. The application uses `python-dotenv` to load these variables. A legacy `DATABASE_URL` value is still accepted if the granular keys are omitted.
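A minimal sketch of what such a loader could look like, assuming `python-dotenv` and the variable names shown above (the helper is illustrative, not the application's actual configuration module):

```python
# Illustrative only: prefer the granular DATABASE_* keys, fall back to DATABASE_URL.
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the project root


def resolve_database_url() -> str:
    keys = ("HOST", "PORT", "USER", "PASSWORD", "NAME")
    granular = {key: os.getenv(f"DATABASE_{key}") for key in keys}
    if all(granular.values()):
        driver = os.getenv("DATABASE_DRIVER", "postgresql")
        return (
            f"{driver}://{granular['USER']}:{granular['PASSWORD']}"
            f"@{granular['HOST']}:{granular['PORT']}/{granular['NAME']}"
        )
    legacy = os.getenv("DATABASE_URL")
    if legacy:
        return legacy
    raise RuntimeError("No database configuration found in the environment")
```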
### Running the Application
```powershell
# Start the FastAPI server
uvicorn main:app --reload
```
## Testing
```powershell
pytest
```
E2E tests use Playwright and a session-scoped `live_server` fixture that starts the app at `http://localhost:8001` for browser-driven tests. E2E tests use Playwright and a session-scoped `live_server` fixture that starts the app at `http://localhost:8001` for browser-driven tests.
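The fixture itself lives in the test suite; a simplified, hedged sketch of how such a session-scoped fixture can be built with uvicorn is shown below (it assumes `main:app` as in the run command above; the readiness polling and port handling are illustrative):

```python
# Simplified, illustrative live_server fixture (the real fixture may differ).
import threading
import time

import pytest
import uvicorn

from main import app


@pytest.fixture(scope="session")
def live_server():
    config = uvicorn.Config(app, host="127.0.0.1", port=8001, log_level="warning")
    server = uvicorn.Server(config)
    thread = threading.Thread(target=server.run, daemon=True)
    thread.start()
    while not server.started:  # wait until uvicorn reports readiness
        time.sleep(0.1)
    yield "http://localhost:8001"
    server.should_exit = True
    thread.join(timeout=5)
```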


@@ -1,6 +1,6 @@
# UI, templates and styling
This document outlines the UI structure, template components, CSS variable conventions, and per-page data/actions for the CalMiner application.
## Reusable Template Components


@@ -1,31 +0,0 @@
# Setup Script Idempotency Audit (2025-10-25)
This note captures the current evaluation of idempotent behaviour for `scripts/setup_database.py` and outlines follow-up actions.
## Admin Tasks
- **ensure_database**: guarded by `SELECT 1 FROM pg_database`; re-runs safely. Failure mode: network issues or lack of privileges surface as psycopg2 errors without additional context.
- **ensure_role**: checks `pg_roles`, creates role if missing, reapplies grants each time. Subsequent runs execute grants again but PostgreSQL tolerates repeated grants.
- **ensure_schema**: uses `information_schema` guard and respects `--dry-run`; idempotent when schema is `public` or already present.
## Application Tasks
- **initialize_schema**: relies on SQLAlchemy `create_all(checkfirst=True)`; repeatable. Dry-run output remains descriptive.
- **run_migrations**: new baseline workflow applies `000_base.sql` once and records legacy scripts as applied. Subsequent runs detect the baseline in `schema_migrations` and skip reapplication.
## Seeding
- `seed_baseline_data` seeds currencies and measurement units with upsert logic. Verification now raises on missing data, preventing silent failures.
- Running `--seed-data` repeatedly performs `ON CONFLICT` updates, making the operation safe.
## Outstanding Risks
1. Baseline migration relies on legacy files being present when first executed; if removed beforehand, old entries are never marked. (Low risk given repository state.)
2. `ensure_database` and `ensure_role` do not wrap SQL execution errors with additional context beyond psycopg2 messages.
3. Baseline verification assumes migrations and seeding run in the same process; manual runs of `scripts/seed_data.py` without the baseline could still fail.
## Recommended Actions
- Add regression tests ensuring repeated executions of key CLI paths (`--run-migrations`, `--seed-data`) result in no-op behaviour after the first run.
- Extend logging/error handling for admin operations to provide clearer messages on repeated failures.
- Consider a preflight check when migrations directory lacks legacy files but baseline is pending, warning about potential drift.


@@ -1,29 +0,0 @@
# Setup Script Logging Audit (2025-10-25)
The following observations capture current logging behaviour in `scripts/setup_database.py` and highlight areas requiring improved error handling and messaging.
## Connection Validation
- `validate_admin_connection` and `validate_application_connection` log entry/exit messages and raise `RuntimeError` with context if connection fails. This coverage is sufficient.
- `ensure_database` logs creation states but does not surface connection or SQL exceptions beyond the initial connection acquisition. When the inner `cursor.execute` calls fail, the exceptions bubble without contextual logging.
## Migration Runner
- Lists pending migrations and logs each application attempt.
- When the baseline is pending, the script logs whether it is a dry-run or live application and records legacy file marking. However, if `_apply_migration_file` raises an exception, the caller re-raises after logging the failure; there is no wrapping message guiding users toward manual cleanup.
- Legacy migration marking happens silently (just info logs). Failures during the insert into `schema_migrations` would currently propagate without added guidance.
## Seeding Workflow
- `seed_baseline_data` announces each seeding phase and skips verification in dry-run mode with a log breadcrumb.
- `_verify_seeded_data` warns about missing currencies/units and inactive defaults but does **not** raise errors, meaning CI can pass while the database is incomplete. There is no explicit log when verification succeeds.
- `_seed_units` logs when the `measurement_unit` table is missing, which is helpful, but the warning is the only feedback; no exception is raised.
## Suggested Enhancements
1. Wrap baseline application and legacy marking in `try/except` blocks that log actionable remediation steps before re-raising.
2. Promote seed verification failures (missing or inactive records) to exceptions so automated workflows fail fast; add success logs for clarity.
3. Add contextual logging around currency/measurement-unit insert failures, particularly around `execute_values` calls, to aid debugging malformed data.
4. Introduce structured logging (log codes or phases) for major steps (`CONNECT`, `MIGRATE`, `SEED`, `VERIFY`) to make scanning log files easier.
These findings inform the remaining TODO subtasks for enhanced error handling.
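For the structured-logging suggestion above (item 4), a phase tag can be as small as a helper around `logging`; a hedged sketch, not the script's actual implementation:

```python
# Illustrative phase-tagged logging helper.
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
logger = logging.getLogger("setup_database")


def log_phase(phase: str, message: str, *args) -> None:
    # Prefix every message with a scannable phase code, e.g. [MIGRATE] ...
    logger.info("[%s] " + message, phase, *args)


log_phase("CONNECT", "Validating admin connection (%s)", "calminer@localhost/postgres")
log_phase("SEED", "Seeding %d currencies", 7)
```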


@@ -1,348 +1,87 @@
# Developer Quickstart
This document contains the expanded development, usage, testing, and migration guidance moved out of the top-level README for brevity.
- [Developer Quickstart](#developer-quickstart)
- [Development](#development)
- [User Interface](#user-interface)
- [Testing](#testing)
- [Staging](#staging)
- [Deployment](#deployment)
- [Using Docker Compose](#using-docker-compose)
- [Manual Docker Deployment](#manual-docker-deployment)
- [Database Deployment \& Migrations](#database-deployment--migrations)
- [Usage Overview](#usage-overview)
- [Theme configuration](#theme-configuration)
- [Where to look next](#where-to-look-next)
This document provides a quickstart guide for developers to set up and run the CalMiner application locally.
## Development
See [Development Setup](docs/developer/development_setup.md).
### Prerequisites
- Python 3.10+
- Node.js 20+ (for Playwright-driven E2E tests)
- Docker (optional, required for containerized workflows)
- Git
To get started locally:
```powershell
# Clone the repository
git clone https://git.allucanget.biz/allucanget/calminer.git
cd calminer
# Create and activate a virtual environment
python -m venv .venv
.\.venv\Scripts\Activate.ps1
# Install dependencies
pip install -r requirements.txt
# Start the development server
uvicorn main:app --reload
```
### User Interface
There is a dedicated [UI and Style](docs/developer/ui_and_style.md) guide for frontend contributors.
### Testing
Testing is described in the [Testing CI](docs/architecture/07_deployment/07_01_testing_ci.md) document.
## Staging
Staging environment setup is covered in [Staging Environment Setup](docs/developer/staging_environment_setup.md).
## Deployment
The application can be deployed using Docker containers.
### Using Docker Compose
For production deployment, use the provided `docker-compose.yml`:
```bash
docker-compose up -d
```
This starts the FastAPI app and PostgreSQL database.
### Manual Docker Deployment
Build and run the container manually:
```bash
docker build -t calminer .
docker run -d -p 8000:8000 \
  -e DATABASE_HOST=your-postgres-host \
  -e DATABASE_USER=calminer \
  -e DATABASE_PASSWORD=your-password \
  -e DATABASE_NAME=calminer_db \
  calminer
```
Ensure the database is set up and migrated before running.
### Database Deployment & Migrations
See the [Database Deployment & Migrations](docs/architecture/07_deployment/07_02_database_deployment_migrations.md) document for details on database deployment and migration strategies.
## Docker-based setup
To build and run the application using Docker instead of a local Python environment:
```powershell
# Build the application image (multi-stage build keeps runtime small)
docker build -t calminer:latest .
# Start the container on port 8000
docker run --rm -p 8000:8000 calminer:latest
# Supply environment variables (e.g., Postgres connection)
docker run --rm -p 8000:8000 ^
  -e DATABASE_DRIVER="postgresql" ^
  -e DATABASE_HOST="db.host" ^
  -e DATABASE_PORT="5432" ^
  -e DATABASE_USER="calminer" ^
  -e DATABASE_PASSWORD="s3cret" ^
  -e DATABASE_NAME="calminer" ^
  -e DATABASE_SCHEMA="public" ^
  calminer:latest
```
If you maintain a Postgres or Redis dependency locally, consider authoring a `docker compose` stack that pairs them with the app container. The Docker image expects the database to be reachable and migrations executed before serving traffic.
### Compose-driven development stack
The repository ships with `docker-compose.dev.yml`, wiring the API and database into a single development stack. It defaults to the Debian-based `postgres:16` image so UTF-8 locales are available without additional tooling and mounts persistent data in the `pg_data_dev` volume.
Typical workflow (run from the repository root):
```powershell
# Build images and ensure dependencies are cached
docker compose -f docker-compose.dev.yml build
# Start FastAPI and Postgres in the background
docker compose -f docker-compose.dev.yml up -d
# Tail logs for both services
docker compose -f docker-compose.dev.yml logs -f
# Stop services but keep the database volume for reuse
docker compose -f docker-compose.dev.yml down
# Remove the persistent Postgres volume when you need a clean slate
docker volume rm calminer_pg_data_dev # optional; confirm exact name with `docker volume ls`
```
Environment variables used by the containers live directly in the compose file (`DATABASE_HOST=db`, `DATABASE_NAME=calminer_dev`, etc.), so no extra `.env` file is required. Adjust or override them via `docker compose ... -e VAR=value` if necessary.
For a deeper walkthrough (including volume naming conventions, port mappings, and how the stack fits into the broader architecture), cross-check `docs/architecture/15_development_setup.md`. That chapter mirrors the compose defaults captured here so both documents stay in sync.
### Compose-driven test stack
Use `docker-compose.test.yml` to spin up a Postgres 16 container and execute the Python test suite in a disposable worker container:
```powershell
# Build images used by the test workflow
docker compose -f docker-compose.test.yml build
# Run the default target (unit tests)
docker compose -f docker-compose.test.yml run --rm tests
# Run a specific target (e.g., full suite)
docker compose -f docker-compose.test.yml run --rm -e PYTEST_TARGET=tests tests
# Tear everything down and drop the test database volume
docker compose -f docker-compose.test.yml down -v
```
The `tests` service prepares the database via `scripts/setup_database.py` before invoking pytest, ensuring migrations and seed data mirror CI behaviour. Named volumes (`pip_cache_test`, `pg_data_test`) cache dependencies and data between runs; remove them with `down -v` whenever you want a pristine environment. An `api` service is available on `http://localhost:8001` for spot-checking API responses against the same test database.
### Compose-driven production stack
Use `docker-compose.prod.yml` for operator-managed deployments. The file defines:
- `api`: FastAPI container with configurable CPU/memory limits and a `/health` probe.
- `traefik`: Optional (enable with the `reverse-proxy` profile) to terminate TLS and route traffic based on `CALMINER_DOMAIN`.
- `postgres`: Optional (enable with the `local-db` profile) when a managed database is unavailable; persists data in `pg_data_prod` and mounts `./backups`.
Commands (run from the repository root):
```powershell
# Prepare environment variables once per environment
copy config\setup_production.env.example config\setup_production.env
# Start API behind Traefik
docker compose ^
--env-file config/setup_production.env ^
-f docker-compose.prod.yml ^
--profile reverse-proxy ^
up -d
# Add the local Postgres profile when running without managed DB
docker compose ^
--env-file config/setup_production.env ^
-f docker-compose.prod.yml ^
--profile reverse-proxy --profile local-db ^
up -d
# Apply migrations/seed data
docker compose ^
--env-file config/setup_production.env ^
-f docker-compose.prod.yml ^
run --rm api ^
python scripts/setup_database.py --run-migrations --seed-data
# Check health (FastAPI exposes /health)
docker compose -f docker-compose.prod.yml ps
# Stop services (volumes persist unless -v is supplied)
docker compose -f docker-compose.prod.yml down
```
Key environment variables (documented in `config/setup_production.env.example`): container image tag, domain/ACME email, published ports, network name, and resource limits (`API_LIMIT_CPUS`, `API_LIMIT_MEMORY`, etc.).
For deployment topology diagrams and operational sequencing, see [docs/architecture/07_deployment_view.md](architecture/07_deployment_view.md#production-docker-compose-topology).
## Usage Overview
- **Run the application**: Follow the [Development Setup](docs/developer/development_setup.md) to get the application running locally.
- **Access the UI**: Open your web browser and navigate to `http://localhost:8000/ui` to access the user interface.
- **API base URL**: `http://localhost:8000/api`
- Key routes include creating scenarios, parameters, costs, consumption, production, equipment, maintenance, and reporting summaries. See the `routes/` directory for full details.
- **UI base URL**: `http://localhost:8000/ui`
### Theme configuration
Theming is laid out in [Theming](docs/architecture/05_03_theming.md).
- Open `/ui/settings` to access the Settings dashboard. The **Theme Colors** form lists every CSS variable persisted in the `application_setting` table. Updates apply immediately across the UI once saved.
- Use the accompanying API endpoints for automation or integration tests:
- `GET /api/settings/css` returns the active variables, defaults, and metadata describing any environment overrides.
- `PUT /api/settings/css` accepts a payload such as `{"variables": {"--color-primary": "#112233"}}` and persists the change unless an environment override is in place.
- Environment variables prefixed with `CALMINER_THEME_` win over database values. For example, setting `CALMINER_THEME_COLOR_PRIMARY="#112233"` renders the corresponding input read-only and surfaces the override in the Environment Overrides table.
- Acceptable values include hex (`#rrggbb` or `#rrggbbaa`), `rgb()/rgba()`, and `hsl()/hsla()` expressions with the expected number of components. Invalid inputs trigger a validation error and the API responds with HTTP 422.
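For automation or integration tests, the same endpoints can be driven from a script; a hedged example using `requests` (base URL and payload values are illustrative):

```python
import requests

BASE = "http://localhost:8000"

# Read active variables, defaults, and environment-override metadata.
current = requests.get(f"{BASE}/api/settings/css", timeout=10).json()
print(current)

# Persist a new primary colour; invalid values are rejected with HTTP 422,
# and variables pinned by CALMINER_THEME_* overrides cannot be changed here.
response = requests.put(
    f"{BASE}/api/settings/css",
    json={"variables": {"--color-primary": "#112233"}},
    timeout=10,
)
response.raise_for_status()
```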
## Dashboard Preview
1. Start the FastAPI server and navigate to `/`.
2. Review the headline metrics, scenario snapshot table, and cost/activity charts sourced from the current database state.
3. Use the "Refresh Dashboard" button to pull freshly aggregated data via `/ui/dashboard/data` without reloading the page.
## Testing
Run the unit test suite:
```powershell
pytest
```
E2E tests use Playwright and a session-scoped `live_server` fixture that starts the app at `http://localhost:8001` for browser-driven tests.
## Migrations & Baseline
A consolidated baseline migration (`scripts/migrations/000_base.sql`) captures all schema changes required for a fresh installation. The script is idempotent: it creates the `currency` and `measurement_unit` reference tables, provisions the `application_setting` store for configurable UI/system options, ensures consumption and production records expose unit metadata, and enforces the foreign keys used by CAPEX and OPEX.
Configure granular database settings in your PowerShell session before running migrations:
```powershell
$env:DATABASE_DRIVER = 'postgresql'
$env:DATABASE_HOST = 'localhost'
$env:DATABASE_PORT = '5432'
$env:DATABASE_USER = 'calminer'
$env:DATABASE_PASSWORD = 's3cret'
$env:DATABASE_NAME = 'calminer'
$env:DATABASE_SCHEMA = 'public'
python scripts/setup_database.py --run-migrations --seed-data --dry-run
python scripts/setup_database.py --run-migrations --seed-data
```
The dry-run invocation reports which steps would execute without making changes. The live run applies the baseline (if not already recorded in `schema_migrations`) and seeds the reference data relied upon by the UI and API.
> When `--seed-data` is supplied without `--run-migrations`, the bootstrap script automatically applies any pending SQL migrations first so the `application_setting` table (and future settings-backed features) are present before seeding.
>
> The application still accepts `DATABASE_URL` as a fallback if the granular variables are not set.
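Conceptually, the interplay between `--seed-data` and `--run-migrations` described above behaves like the sketch below (function names are illustrative stand-ins, not the script's actual structure):

```python
# Conceptual flow only: seeding implies migrations so required tables exist first.
def apply_pending_migrations(dry_run: bool) -> None:
    print("would apply pending migrations" if dry_run else "applying pending migrations")


def seed_baseline_data(dry_run: bool) -> None:
    print("would seed reference data" if dry_run else "seeding reference data")


def bootstrap(run_migrations: bool, seed_data: bool, dry_run: bool) -> None:
    if run_migrations or seed_data:
        apply_pending_migrations(dry_run=dry_run)
    if seed_data:
        seed_baseline_data(dry_run=dry_run)


bootstrap(run_migrations=False, seed_data=True, dry_run=True)
```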
## Database bootstrap workflow
Provision or refresh a database instance with `scripts/setup_database.py`. Populate the required environment variables (an example lives at `config/setup_test.env.example`) and run:
```powershell
# Load test credentials (PowerShell)
Get-Content .\config\setup_test.env.example |
ForEach-Object {
if ($_ -and -not $_.StartsWith('#')) {
$name, $value = $_ -split '=', 2
Set-Item -Path Env:$name -Value $value
}
}
# Dry-run to inspect the planned actions
python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data --dry-run -v
# Execute the full workflow
python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data -v
```
Typical log output confirms:
- Admin and application connections succeed for the supplied credentials.
- Database and role creation are idempotent (`already present` when rerun).
- SQLAlchemy metadata either reports missing tables or `All tables already exist`.
- Migrations list pending files and finish with `Applied N migrations` (a new database reports `Applied 1 migrations` for `000_base.sql`).
After a successful run the target database contains all application tables plus `schema_migrations`, and that table records each applied migration file. New installations only record `000_base.sql`; upgraded environments retain historical entries alongside the baseline.
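A quick way to confirm that state is to query `schema_migrations` directly; a hedged example with psycopg2 (the table's exact column layout is not assumed here):

```python
# Inspect which migration files the setup script has recorded as applied.
import os

import psycopg2

conn = psycopg2.connect(
    host=os.environ["DATABASE_HOST"],
    port=os.environ["DATABASE_PORT"],
    user=os.environ["DATABASE_USER"],
    password=os.environ["DATABASE_PASSWORD"],
    dbname=os.environ["DATABASE_NAME"],
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT * FROM schema_migrations")
    for row in cur.fetchall():
        print(row)  # a fresh installation should list only 000_base.sql
conn.close()
```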
### Local Postgres via Docker Compose
For local validation without installing Postgres directly, use the provided compose file:
```powershell
docker compose -f docker-compose.postgres.yml up -d
```
#### Summary
1. Start the Postgres container with `docker compose -f docker-compose.postgres.yml up -d`.
2. Export the granular database environment variables (host `127.0.0.1`, port `5433`, database `calminer_local`, user/password `calminer`/`secret`).
3. Run the setup script twice: first with `--dry-run` to preview actions, then without it to apply changes.
4. When finished, stop and optionally remove the container/volume using `docker compose -f docker-compose.postgres.yml down`.
The service exposes Postgres 16 on `localhost:5433` with database `calminer_local` and role `calminer`/`secret`. When the container is running, set the granular environment variables before invoking the setup script:
```powershell
$env:DATABASE_DRIVER = 'postgresql'
$env:DATABASE_HOST = '127.0.0.1'
$env:DATABASE_PORT = '5433'
$env:DATABASE_USER = 'calminer'
$env:DATABASE_PASSWORD = 'secret'
$env:DATABASE_NAME = 'calminer_local'
$env:DATABASE_SCHEMA = 'public'
python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data --dry-run -v
python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data -v
```
When testing is complete, shut down the container (and optional persistent volume) with:
```powershell
docker compose -f docker-compose.postgres.yml down
docker volume rm calminer_postgres_local_postgres_data # optional cleanup
```
### Seeding reference data
`scripts/seed_data.py` provides targeted control over the baseline datasets when the full setup script is not required:
```powershell
python scripts/seed_data.py --currencies --units --dry-run
python scripts/seed_data.py --currencies --units
```
The seeder upserts the canonical currency catalog (`USD`, `EUR`, `CLP`, `RMB`, `GBP`, `CAD`, `AUD`) using ASCII-safe symbols (`USD$`, `EUR`, etc.) and the measurement units referenced by the UI (`tonnes`, `kilograms`, `pounds`, `liters`, `cubic_meters`, `kilowatt_hours`). The setup script invokes the same seeder when `--seed-data` is provided and verifies the expected rows afterward, warning if any are missing or inactive.
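The upsert pattern behind this behaviour looks roughly like the snippet below (table and column names are assumptions inferred from this description, not copied from `seed_data.py`):

```python
# Illustrative ON CONFLICT upsert for the currency catalog; connection details are placeholders.
import psycopg2
from psycopg2.extras import execute_values

rows = [
    ("USD", "US Dollar", "USD$", True),
    ("EUR", "Euro", "EUR", True),
]

conn = psycopg2.connect("dbname=calminer user=calminer password=secret host=localhost")
with conn, conn.cursor() as cur:
    execute_values(
        cur,
        """
        INSERT INTO currency (code, name, symbol, is_active)
        VALUES %s
        ON CONFLICT (code) DO UPDATE
            SET name = EXCLUDED.name,
                symbol = EXCLUDED.symbol,
                is_active = EXCLUDED.is_active
        """,
        rows,
    )
conn.close()
```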
### Rollback guidance
`scripts/setup_database.py` now tracks compensating actions when it creates the database or application role. If a later step fails, the script replays those rollback actions (dropping the newly created database or role and revoking grants) before exiting. Dry runs never register rollback steps and remain read-only.
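The underlying pattern is a stack of compensating callbacks replayed in reverse order when a later step fails; a hedged, self-contained sketch of the idea (not the script's exact code):

```python
# Generic compensating-action pattern, shown in isolation with print() stand-ins.
from typing import Callable, List


class RollbackStack:
    def __init__(self) -> None:
        self._actions: List[Callable[[], None]] = []

    def register(self, action: Callable[[], None]) -> None:
        self._actions.append(action)

    def unwind(self) -> None:
        # Replay compensations in reverse order of registration.
        for action in reversed(self._actions):
            try:
                action()
            except Exception as exc:  # keep unwinding even if one step fails
                print(f"rollback step failed: {exc}")


rollback = RollbackStack()
try:
    print("CREATE DATABASE calminer")  # stand-in for the real admin SQL
    rollback.register(lambda: print("DROP DATABASE IF EXISTS calminer"))
    raise RuntimeError("simulated failure in a later step")
except Exception:
    rollback.unwind()
    print("setup failed; compensating actions were replayed")
```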
If the script reports that some rollback steps could not complete—for example because a connection cannot be established—rerun the script with `--dry-run` to confirm the desired end state and then apply the outstanding cleanup manually:
```powershell
python scripts/setup_database.py --ensure-database --ensure-role --dry-run -v
# Manual cleanup examples when automation cannot connect
psql -d postgres -c "DROP DATABASE IF EXISTS calminer"
psql -d postgres -c "DROP ROLE IF EXISTS calminer"
```
After a failure and rollback, rerun the full setup once the environment issues are resolved.
### CI pipeline environment
The `.gitea/workflows/test.yml` job spins up a temporary PostgreSQL 16 container and runs the setup script twice: once with `--dry-run` to validate the plan and again without it to apply migrations and seeds. No external secrets are required; the workflow sets the following environment variables for both invocations and for pytest:
| Variable | Value | Purpose |
| ----------------------------- | ------------- | ------------------------------------------------- |
| `DATABASE_DRIVER` | `postgresql` | Signals the driver to the setup script |
| `DATABASE_HOST` | `postgres` | Hostname of the Postgres job service container |
| `DATABASE_PORT` | `5432` | Default service port |
| `DATABASE_NAME` | `calminer_ci` | Target database created by the workflow |
| `DATABASE_USER` | `calminer` | Application role used during tests |
| `DATABASE_PASSWORD` | `secret` | Password for both admin and app role |
| `DATABASE_SCHEMA` | `public` | Default schema for the tests |
| `DATABASE_SUPERUSER` | `calminer` | Setup script uses the same role for admin actions |
| `DATABASE_SUPERUSER_PASSWORD` | `secret` | Matches the Postgres service password |
| `DATABASE_SUPERUSER_DB` | `calminer_ci` | Database to connect to for admin operations |
The workflow also updates `DATABASE_URL` for pytest to point at the CI Postgres instance. Existing tests continue to work unchanged, since SQLAlchemy reads the URL exactly as it does locally.
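That URL is simply the granular values from the table joined into a SQLAlchemy-style DSN; an illustrative reconstruction:

```python
import os

# Defaults mirror the CI values in the table above.
url = (
    f"{os.getenv('DATABASE_DRIVER', 'postgresql')}://"
    f"{os.getenv('DATABASE_USER', 'calminer')}:{os.getenv('DATABASE_PASSWORD', 'secret')}"
    f"@{os.getenv('DATABASE_HOST', 'postgres')}:{os.getenv('DATABASE_PORT', '5432')}"
    f"/{os.getenv('DATABASE_NAME', 'calminer_ci')}"
)
print(url)  # postgresql://calminer:secret@postgres:5432/calminer_ci
```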
Because the workflow provisions everything inline, no repository or organization secrets need to be configured for basic CI runs. If you later move the setup step to staging or production pipelines, replace these inline values with secrets managed by the CI platform. When running on self-hosted runners behind an HTTP proxy or apt cache, ensure Playwright dependencies and OS packages inherit the same proxy settings that the workflow configures prior to installing browsers.
### Staging environment workflow
Use the staging checklist in `docs/staging_environment_setup.md` when running the setup script against the shared environment. A sample variable file (`config/setup_staging.env`) records the expected inputs (host, port, admin/application roles); copy it outside the repository or load the values securely via your shell before executing the workflow.
Recommended execution order:
1. Dry run with `--dry-run -v` to confirm connectivity and review planned operations. Capture the output to `reports/setup_staging_dry_run.log` (or similar) for auditing.
2. Execute the live run with the same flags minus `--dry-run` to provision the database, role grants, migrations, and seed data. Save the log as `reports/setup_staging_apply.log`.
3. Repeat the dry run to verify idempotency and record the result (for example `reports/setup_staging_post_apply.log`).
## Database Objects
The database contains tables such as `capex`, `opex`, `chemical_consumption`, `fuel_consumption`, `water_consumption`, `scrap_consumption`, `production_output`, `equipment_operation`, `ore_batch`, `exchange_rate`, and `simulation_result`.
## Current implementation status (2025-10-21)
- Currency normalization: a `currency` table and backfill scripts exist; routes accept `currency_id` and `currency_code` for compatibility.
- Simulation engine: scaffolding in `services/simulation.py` and `/api/simulations/run` return in-memory results; persistence to `models/simulation_result` is planned.
- Reporting: `services/reporting.py` provides summary statistics used by `POST /api/reporting/summary`.
- Tests & coverage: unit and E2E suites exist; recent local coverage is >90%.
- Remaining work: authentication, persist simulation runs, CI/CD and containerization.
## Where to look next
- Architecture overview & chapters: [architecture](architecture/README.md) (per-chapter files under `docs/architecture/`)
- [Testing & CI](architecture/07_deployment/07_01_testing_ci.md.md)
- [Development setup](developer/development_setup.md)
- Implementation plan & roadmap: [Solution strategy](architecture/04_solution_strategy.md)
- Routes: [routes](../routes/)
- Services: [services](../services/)

docs/roadmap.md (new file)

@@ -0,0 +1,37 @@
# Roadmap
## Overview
## Scenario Enhancements
For each scenario, the goal is to evaluate the financial viability, operational efficiency, and risk factors associated with the mining project. The data captured for each scenario is used to perform calculations, generate reports, and visualize results through charts and dashboards, enabling users to make informed decisions based on comprehensive analysis.
### Scenario & Data Management
Scenarios are the core organizational unit within CalMiner, allowing users to create, manage, and analyze different mining project configurations. Each scenario encapsulates a unique set of parameters and data inputs that define the mining operation being modeled.
#### Scenario Creation
Users can create new scenarios by providing a unique name and description. The system will generate a new scenario with default parameters, which can be customized later.
#### Scenario Management
Users can manage existing scenarios by modifying their parameters, adding new data inputs, or deleting them as needed.
#### Data Inputs
Users can define and manage various data inputs for each scenario, including:
- **Geological Data**: Input data related to the geological characteristics of the mining site.
- **Operational Parameters**: Define parameters such as mining methods, equipment specifications, and workforce details.
- **Financial Data**: Input cost structures, revenue models, and financial assumptions.
- **Environmental Data**: Include data related to environmental impact, regulations, and sustainability practices.
- **Technical Data**: Specify technical parameters such as ore grades, recovery rates, and processing methods.
- **Social Data**: Incorporate social impact assessments, community engagement plans, and stakeholder analysis.
- **Regulatory Data**: Include data related to legal and regulatory requirements, permits, and compliance measures.
- **Market Data**: Input market conditions, commodity prices, and economic indicators that may affect the mining operation.
- **Risk Data**: Define risk factors, probabilities, and mitigation strategies for the mining project.
- **Logistical Data**: Include data related to transportation, supply chain management, and infrastructure requirements.
- **Maintenance Data**: Input maintenance schedules, costs, and equipment reliability metrics.
- **Human Resources Data**: Define workforce requirements, training programs, and labor costs.
- **Health and Safety Data**: Include data related to workplace safety protocols, incident rates, and health programs.


@@ -1,78 +0,0 @@
# Baseline Seed Data Plan
This document captures the datasets that should be present in a fresh CalMiner installation and the structure required to manage them through `scripts/seed_data.py`.
## Currency Catalog
The `currency` table already exists and is seeded today via `scripts/seed_data.py`. The goal is to keep the canonical list in one place and ensure the default currency (USD) is always active.
| Code | Name | Symbol | Notes |
| ---- | ------------------- | ------ | ---------------------------------------- |
| USD | US Dollar | $ | Default currency (`DEFAULT_CURRENCY_CODE`) |
| EUR | Euro | EUR symbol | |
| CLP | Chilean Peso | $ | |
| RMB | Chinese Yuan | RMB symbol | |
| GBP | British Pound | GBP symbol | |
| CAD | Canadian Dollar | $ | |
| AUD | Australian Dollar | $ | |
Seeding behaviour:
- Upsert by ISO code; keep existing name/symbol when updated manually.
- Ensure `is_active` remains true for USD and defaults to true for new rows.
- Defer to runtime validation in `routes.currencies` for enforcing default behaviour.
## Measurement Units
UI routes (`routes/ui.py`) currently rely on the in-memory `MEASUREMENT_UNITS` list to populate dropdowns for consumption and production forms. To make this configurable and available to the API, introduce a dedicated `measurement_unit` table and seed it.
Proposed schema:
| Column | Type | Notes |
| ------------- | -------------- | ------------------------------------ |
| id | SERIAL / BIGINT | Primary key. |
| code | TEXT | Stable slug (e.g. `tonnes`). Unique. |
| name | TEXT | Display label. |
| symbol | TEXT | Short symbol (nullable). |
| unit_type | TEXT | Category (`mass`, `volume`, `energy`).|
| is_active | BOOLEAN | Default `true` for soft disabling. |
| created_at | TIMESTAMP | Optional `NOW()` default. |
| updated_at | TIMESTAMP | Optional `NOW()` trigger/default. |
Initial seed set (mirrors existing UI list plus type categorisation):
| Code | Name | Symbol | Unit Type |
| --------------- | ---------------- | ------ | --------- |
| tonnes | Tonnes | t | mass |
| kilograms | Kilograms | kg | mass |
| pounds | Pounds | lb | mass |
| liters | Liters | L | volume |
| cubic_meters | Cubic Meters | m3 | volume |
| kilowatt_hours | Kilowatt Hours | kWh | energy |
Seeding behaviour:
- Upsert rows by `code`.
- Preserve `unit_type` and `symbol` unless explicitly changed via administration tooling.
- Continue surfacing unit options to the UI by querying this table instead of the static constant.
## Default Settings
The application expects certain defaults to exist:
- **Default currency**: enforced by `routes.currencies._ensure_default_currency`; ensure seeds keep USD active.
- **Fallback measurement unit**: UI currently auto-selects the first option in the list. Once units move to the database, expose an application setting to choose a fallback (future work tracked under "Application Settings management").
## Seeding Structure Updates
To support the datasets above:
1. Extend `scripts/seed_data.py` with a `SeedDataset` registry so each dataset (currencies, units, future defaults) can declare its loader/upsert function and optional dependencies.
2. Add a `--dataset` CLI selector for targeted seeding while keeping `--all` as the default for `setup_database.py` integrations.
3. Update `scripts/setup_database.py` to:
- Run migration ensuring `measurement_unit` table exists.
- Execute the unit seeder after currencies when `--seed-data` is supplied.
- Verify post-seed counts, logging which dataset was inserted/updated.
4. Adjust UI routes to load measurement units from the database and remove the hard-coded list once the table is available.
This plan aligns with the TODO item for seeding initial data and lays the groundwork for consolidating migrations around a single baseline file that introduces both the schema and seed data in an idempotent manner.
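The registry described in item 1 could be as small as a dataclass plus a dict; a hedged sketch with illustrative names and print() stand-ins for the real upsert functions:

```python
# Illustrative SeedDataset registry; the real CLI wiring may differ.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class SeedDataset:
    name: str
    seed: Callable[[], None]            # performs the upsert for this dataset
    depends_on: List[str] = field(default_factory=list)


REGISTRY: Dict[str, SeedDataset] = {}


def register(dataset: SeedDataset) -> None:
    REGISTRY[dataset.name] = dataset


register(SeedDataset("currencies", seed=lambda: print("upsert currencies")))
register(SeedDataset("units", seed=lambda: print("upsert measurement units"),
                     depends_on=["currencies"]))


def run(selected: List[str]) -> None:
    done: set = set()

    def _run(name: str) -> None:
        if name in done:
            return
        dataset = REGISTRY[name]
        for dependency in dataset.depends_on:
            _run(dependency)
        dataset.seed()
        done.add(name)

    for name in selected:
        _run(name)


run(["units"])  # seeds currencies first because of the declared dependency
```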


@@ -1,4 +1,5 @@
fastapi
pydantic>=2.0,<3.0
uvicorn
sqlalchemy
psycopg2-binary


@@ -250,7 +250,7 @@ class DatabaseSetup:
        descriptor = self._describe_connection(
            self.config.admin_user, self.config.admin_database
        )
        logger.info("[CONNECT] Validating admin connection (%s)", descriptor)
        try:
            with self._admin_connection(self.config.admin_database) as conn:
                with conn.cursor() as cursor:
@@ -261,13 +261,14 @@ class DatabaseSetup:
"Check DATABASE_ADMIN_URL or DATABASE_SUPERUSER settings." "Check DATABASE_ADMIN_URL or DATABASE_SUPERUSER settings."
f" Target: {descriptor}" f" Target: {descriptor}"
) from exc ) from exc
logger.info("Admin connection verified (%s)", descriptor) logger.info("[CONNECT] Admin connection verified (%s)", descriptor)
def validate_application_connection(self) -> None: def validate_application_connection(self) -> None:
descriptor = self._describe_connection( descriptor = self._describe_connection(
self.config.user, self.config.database self.config.user, self.config.database
) )
logger.info("Validating application connection (%s)", descriptor) logger.info(
"[CONNECT] Validating application connection (%s)", descriptor)
try: try:
with self._application_connection() as conn: with self._application_connection() as conn:
with conn.cursor() as cursor: with conn.cursor() as cursor:
@@ -278,7 +279,8 @@ class DatabaseSetup:
"Ensure the role exists and credentials are correct. " "Ensure the role exists and credentials are correct. "
f"Target: {descriptor}" f"Target: {descriptor}"
) from exc ) from exc
logger.info("Application connection verified (%s)", descriptor) logger.info(
"[CONNECT] Application connection verified (%s)", descriptor)
def ensure_database(self) -> None: def ensure_database(self) -> None:
"""Create the target database when it does not already exist.""" """Create the target database when it does not already exist."""
@@ -586,31 +588,28 @@ class DatabaseSetup:
        except RuntimeError:
            raise

    def _connect(self, dsn: str, descriptor: str) -> PGConnection:
        # Shared helper: wraps psycopg2 connection errors with the target descriptor.
        try:
            return psycopg2.connect(dsn)
        except psycopg2.Error as exc:
            raise RuntimeError(
                f"Unable to establish connection. Target: {descriptor}"
            ) from exc

    def _admin_connection(self, database: Optional[str] = None) -> PGConnection:
        target_db = database or self.config.admin_database
        dsn = self.config.admin_dsn(database)
        descriptor = self._describe_connection(
            self.config.admin_user, target_db
        )
        return self._connect(dsn, descriptor)

    def _application_connection(self) -> PGConnection:
        dsn = self.config.application_dsn()
        descriptor = self._describe_connection(
            self.config.user, self.config.database
        )
        return self._connect(dsn, descriptor)

    def initialize_schema(self) -> None:
        """Create database objects from SQLAlchemy metadata if missing."""
@@ -704,6 +703,41 @@ class DatabaseSetup:
                cursor, schema_name
            )
            self._handle_baseline_migration(
                cursor, schema_name, baseline_path, baseline_name, migration_files, applied
            )
            pending = [
                path for path in migration_files if path.name not in applied
            ]
            if not pending:
                logger.info("No pending migrations")
                return
            logger.info(
                "Pending migrations: %s",
                ", ".join(path.name for path in pending),
            )
            if self.dry_run:
                logger.info("Dry run: skipping migration execution")
                return
            for path in pending:
                self._apply_migration_file(cursor, schema_name, path)
            logger.info("Applied %d migrations", len(pending))

    def _handle_baseline_migration(
        self,
        cursor: extensions.cursor,
        schema_name: str,
        baseline_path: Path,
        baseline_name: str,
        migration_files: list[Path],
        applied: set[str],
    ) -> None:
        if baseline_path.exists() and baseline_name not in applied:
            if self.dry_run:
                logger.info(
@@ -712,7 +746,7 @@ class DatabaseSetup:
                )
            else:
                logger.info(
                    "[MIGRATE] Baseline migration '%s' pending; applying and marking older migrations",
                    baseline_name,
                )
                try:
@@ -728,6 +762,18 @@ class DatabaseSetup:
                    )
                    raise
                applied.add(baseline_applied)
            self._mark_legacy_migrations_as_applied(
                cursor, schema_name, migration_files, baseline_name, applied
            )

    def _mark_legacy_migrations_as_applied(
        self,
        cursor: extensions.cursor,
        schema_name: str,
        migration_files: list[Path],
        baseline_name: str,
        applied: set[str],
    ) -> None:
        legacy_files = [
            path
            for path in migration_files
@@ -762,28 +808,6 @@ class DatabaseSetup:
                legacy.name,
            )

    def _apply_migration_file(
        self,
        cursor,
@@ -847,10 +871,18 @@ class DatabaseSetup:
            dry_run=dry_run,
            verbose=0,
        )
        try:
            seed_data.run_with_namespace(seed_args, config=self.config)
        except Exception:
            logger.error(
                "[SEED] Failed during baseline data seeding. "
                "Review seed_data.py and rerun with --dry-run for diagnostics.",
                exc_info=True,
            )
            raise
        if dry_run:
            logger.info("[SEED] Dry run: skipped seed verification")
            return
        expected_currencies = {
@@ -896,7 +928,7 @@ class DatabaseSetup:
            raise RuntimeError(message)
        logger.info(
            "[VERIFY] Verified %d seeded currencies present",
            len(found_codes),
        )
@@ -918,7 +950,8 @@ class DatabaseSetup:
            logger.error(message)
            raise RuntimeError(message)
        else:
            logger.info(
                "[VERIFY] Verified default currency 'USD' active")
        if expected_unit_codes:
            try: