28 Commits

Author SHA1 Message Date
e74ec79cc9 feat: Add staging environment setup guide and configuration files; update .gitignore
All checks were successful
Run Tests / test (push) Successful in 1m49s
2025-10-25 18:01:46 +02:00
f3ce095b71 docs: Add summary for Postgres container setup in quickstart guide 2025-10-25 17:05:49 +02:00
4e1658a638 refactor: Update CI configuration and local setup documentation; remove obsolete currency migration scripts 2025-10-25 16:59:35 +02:00
bff75a722e fix: Disable trust_env for httpx requests in live server and currency seeding fixtures
All checks were successful
Run Tests / test (push) Successful in 1m49s
2025-10-25 16:40:55 +02:00
d455320eea fix: Update CI workflow to configure APT proxy and improve currency workflow tests
Some checks failed
Run Tests / test (push) Failing after 2m5s
2025-10-25 16:34:10 +02:00
2182f723f7 feat: Add step to install Playwright browsers in CI workflow
Some checks failed
Run Tests / test (push) Failing after 2m15s
2025-10-25 16:23:47 +02:00
b3e6546bb9 fix: Comment out pip caching step in CI workflow
Some checks failed
Run Tests / test (push) Failing after 16s
2025-10-25 16:22:02 +02:00
5c66bf7899 fix: Update import statement for client in currency workflow tests
Some checks failed
Run Tests / test (push) Has been cancelled
2025-10-25 16:21:26 +02:00
9bd5b60d7a fix: Update cache key to include requirements-test.txt for better dependency management
Some checks failed
Run Tests / test (push) Failing after 4m48s
2025-10-25 16:11:32 +02:00
01a702847d fix: Update database host in CI workflow to use service name instead of localhost
Some checks failed
Run Tests / test (push) Failing after 4m47s
2025-10-25 16:05:27 +02:00
1237902d55 feat: Add wait step for database service availability in CI workflow
Some checks failed
Run Tests / test (push) Failing after 5m46s
2025-10-25 15:57:03 +02:00
dd3f3141e3 feat: Add currency management feature with CRUD operations
Some checks failed
Run Tests / test (push) Failing after 5m2s
- Introduced a new template for currency overview and management (`currencies.html`).
- Updated footer to include attribution to AllYouCanGET.
- Added "Currencies" link to the main navigation header.
- Implemented end-to-end tests for currency creation, update, and activation toggling.
- Created unit tests for currency API endpoints, including creation, updating, and activation toggling.
- Added a fixture to seed default currencies for testing.
- Enhanced database setup tests to ensure proper seeding and migration handling.
2025-10-25 15:44:57 +02:00
659b66cc28 style: Update color variables in CSS and improve scenario prompts in templates
Some checks failed
Build and Push Docker Image / build-and-push (push) Successful in 1m51s
Deploy to Server / deploy (push) Failing after 2s
Run Tests / test (push) Failing after 4m44s
2025-10-25 11:16:24 +02:00
2b1771af86 fix: Update .gitignore to match test database naming pattern 2025-10-25 11:16:14 +02:00
9b0c29bade refactor: Simplify caching steps in CI workflow and remove redundant cache for test dependencies
Some checks failed
Build and Push Docker Image / build-and-push (push) Successful in 1m3s
Deploy to Server / deploy (push) Failing after 3s
Run Tests / test (push) Failing after 5m18s
2025-10-24 19:53:03 +02:00
f35607fedc feat: Add CI workflow for running tests and update database URL handling
Some checks failed
Build and Push Docker Image / build-and-push (push) Successful in 1m8s
Deploy to Server / deploy (push) Failing after 2s
Run Tests / test (push) Failing after 9m32s
2025-10-24 19:19:24 +02:00
28fea1f3fe docs: Update README and architecture documents with build instructions and detailed data models
Some checks failed
Build and Push Docker Image / build-and-push (push) Failing after 1m59s
Deploy to Server / deploy (push) Failing after 3s
2025-10-24 13:49:04 +02:00
ae19cd67c4 fix: Downgrade docker/build-push-action to v4 and update deploy script to use environment variables
Some checks failed
Build and Push Docker Image / build-and-push (push) Failing after 2m10s
Deploy to Server / deploy (push) Failing after 2s
2025-10-23 21:18:25 +02:00
e2f11a1459 refactor: Remove hardcoded production environment variables from Dockerfile
Some checks failed
Build and Push Docker Image / build-and-push (push) Failing after 1m9s
Deploy to Server / deploy (push) Failing after 3s
2025-10-23 19:41:13 +02:00
f864ad563a fix: Add proxy configuration in Dockerfile and remove hardcoded environment variables
Some checks failed
Build and Push Docker Image / build-and-push (push) Failing after 12s
Deploy to Server / deploy (push) Failing after 3s
2025-10-23 19:38:22 +02:00
93a2f54f97 fix: build workflow variables
Some checks failed
Build and Push Docker Image / build-and-push (push) Failing after 4m10s
Deploy to Server / deploy (push) Failing after 2s
2025-10-23 19:28:41 +02:00
8dedfb8f26 feat: Refactor database configuration to use granular environment variables; update Docker and CI/CD workflows accordingly
Some checks failed
Build and Push Docker Image / build-and-push (push) Failing after 6s
Deploy to Server / deploy (push) Failing after 2s
2025-10-23 19:17:24 +02:00
8c3062fd80 chore: Update action versions in build workflow and add playwright to requirements 2025-10-23 17:55:06 +02:00
0acc2cb3fe fix: Correct registry URL variable in build and deploy workflows
Some checks failed
Build and Push Docker Image / build-and-push (push) Failing after 5s
Deploy to Server / deploy (push) Failing after 2s
Run Tests / test (push) Failing after 4m43s
2025-10-23 17:44:15 +02:00
0e51a3883d fix: Update registry secrets in build and deploy workflows for consistency
Some checks failed
Build and Push Docker Image / build-and-push (push) Failing after 5s
Deploy to Server / deploy (push) Failing after 2s
Run Tests / test (push) Has been cancelled
2025-10-23 17:39:47 +02:00
f228eac61f feat: Add Docker support and CI/CD workflows documentation; include setup instructions for Docker-based deployment
Some checks failed
Build and Push Docker Image / build-and-push (push) Failing after 1m35s
Deploy to Server / deploy (push) Failing after 3s
Run Tests / test (push) Failing after 6m2s
2025-10-23 17:25:26 +02:00
119bbcc7a8 feat: Add Docker workflows for building, testing, and deploying the application; include Dockerfile for image creation 2025-10-23 17:06:14 +02:00
8aee7b0d74 refactor: Enhance architecture documentation with detailed sections on purpose, constraints, runtime view, deployment, and key concepts; add implementation plan and update quickstart reference 2025-10-23 16:59:15 +02:00
61 changed files with 5067 additions and 312 deletions

.dockerignore (new file, 21 lines)

@@ -0,0 +1,21 @@
.venv
__pycache__
*.pyc
*.pyo
*.pyd
.Python
env/
venv/
.idea
.vscode
.git
.gitignore
.DS_Store
dist
build
*.egg-info
*.sqlite3
.env
.env.*
.Dockerfile
.dockerignore


@@ -1,4 +1,14 @@
# Example environment variables for CalMiner
# PostgreSQL connection settings
DATABASE_DRIVER=postgresql
DATABASE_HOST=localhost
DATABASE_PORT=5432
DATABASE_USER=<user>
DATABASE_PASSWORD=<password>
DATABASE_NAME=calminer
# Optional: set a schema (comma-separated for multiple entries)
# DATABASE_SCHEMA=public
# Legacy fallback (still supported, but granular settings are preferred)
# DATABASE_URL=postgresql://<user>:<password>@localhost:5432/calminer


@@ -0,0 +1,59 @@
name: Build and Push Docker Image

on:
  push:
    branches:
      - main

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    env:
      DEFAULT_BRANCH: main
      REGISTRY_ORG: allucanget
      REGISTRY_IMAGE_NAME: calminer
      REGISTRY_URL: ${{ secrets.REGISTRY_URL }}
      REGISTRY_USERNAME: ${{ secrets.REGISTRY_USERNAME }}
      REGISTRY_PASSWORD: ${{ secrets.REGISTRY_PASSWORD }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Collect workflow metadata
        id: meta
        shell: bash
        run: |
          ref_name="${GITHUB_REF_NAME:-${GITHUB_REF##*/}}"
          event_name="${GITHUB_EVENT_NAME:-}"
          sha="${GITHUB_SHA:-}"
          if [ "$ref_name" = "${DEFAULT_BRANCH:-main}" ]; then
            echo "on_default=true" >> "$GITHUB_OUTPUT"
          else
            echo "on_default=false" >> "$GITHUB_OUTPUT"
          fi
          echo "ref_name=$ref_name" >> "$GITHUB_OUTPUT"
          echo "event_name=$event_name" >> "$GITHUB_OUTPUT"
          echo "sha=$sha" >> "$GITHUB_OUTPUT"
      - name: Set up QEMU and Buildx
        uses: docker/setup-buildx-action@v3
      - name: Log in to Gitea registry
        if: ${{ steps.meta.outputs.on_default == 'true' }}
        uses: docker/login-action@v3
        continue-on-error: true
        with:
          registry: ${{ env.REGISTRY_URL }}
          username: ${{ env.REGISTRY_USERNAME }}
          password: ${{ env.REGISTRY_PASSWORD }}
      - name: Build and push Docker image
        uses: docker/build-push-action@v4
        with:
          context: .
          file: Dockerfile
          push: ${{ steps.meta.outputs.on_default == 'true' && steps.meta.outputs.event_name != 'pull_request' && (env.REGISTRY_URL != '' && env.REGISTRY_USERNAME != '' && env.REGISTRY_PASSWORD != '') }}
          tags: |
            ${{ env.REGISTRY_URL }}/${{ env.REGISTRY_ORG }}/${{ env.REGISTRY_IMAGE_NAME }}:latest
            ${{ env.REGISTRY_URL }}/${{ env.REGISTRY_ORG }}/${{ env.REGISTRY_IMAGE_NAME }}:${{ steps.meta.outputs.sha }}


@@ -0,0 +1,36 @@
name: Deploy to Server

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      DEFAULT_BRANCH: main
      REGISTRY_ORG: allucanget
      REGISTRY_IMAGE_NAME: calminer
      REGISTRY_URL: ${{ secrets.REGISTRY_URL }}
      REGISTRY_USERNAME: ${{ secrets.REGISTRY_USERNAME }}
      REGISTRY_PASSWORD: ${{ secrets.REGISTRY_PASSWORD }}
    steps:
      - name: SSH and deploy
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.SSH_HOST }}
          username: ${{ secrets.SSH_USERNAME }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          script: |
            docker pull ${{ env.REGISTRY_URL }}/${{ env.REGISTRY_ORG }}/${{ env.REGISTRY_IMAGE_NAME }}:latest
            docker stop calminer || true
            docker rm calminer || true
            docker run -d --name calminer -p 8000:8000 \
              -e DATABASE_DRIVER=${{ secrets.DATABASE_DRIVER }} \
              -e DATABASE_HOST=${{ secrets.DATABASE_HOST }} \
              -e DATABASE_PORT=${{ secrets.DATABASE_PORT }} \
              -e DATABASE_USER=${{ secrets.DATABASE_USER }} \
              -e DATABASE_PASSWORD=${{ secrets.DATABASE_PASSWORD }} \
              -e DATABASE_NAME=${{ secrets.DATABASE_NAME }} \
              -e DATABASE_SCHEMA=${{ secrets.DATABASE_SCHEMA }} \
              ${{ secrets.REGISTRY_URL }}/${{ secrets.REGISTRY_USERNAME }}/calminer:latest

.gitea/workflows/test.yml (new file, 125 lines)

@@ -0,0 +1,125 @@
name: Run Tests

on: [push]

jobs:
  test:
    services:
      postgres:
        image: postgres:16-alpine
        env:
          POSTGRES_DB: calminer_ci
          POSTGRES_USER: calminer
          POSTGRES_PASSWORD: secret
        ports:
          - 5432:5432
        options: >-
          --health-cmd "pg_isready -U calminer -d calminer_ci"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 10
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.10"
      - name: Configure apt proxy
        run: |
          set -euo pipefail
          PROXY_HOST="http://apt-cacher:3142"
          if ! curl -fsS --connect-timeout 3 "${PROXY_HOST}" >/dev/null; then
            PROXY_HOST="http://192.168.88.14:3142"
          fi
          echo "Using APT proxy ${PROXY_HOST}"
          echo "http_proxy=${PROXY_HOST}" >> "$GITHUB_ENV"
          echo "https_proxy=${PROXY_HOST}" >> "$GITHUB_ENV"
          echo "HTTP_PROXY=${PROXY_HOST}" >> "$GITHUB_ENV"
          echo "HTTPS_PROXY=${PROXY_HOST}" >> "$GITHUB_ENV"
          sudo tee /etc/apt/apt.conf.d/01proxy >/dev/null <<EOF
          Acquire::http::Proxy "${PROXY_HOST}";
          Acquire::https::Proxy "${PROXY_HOST}";
          EOF
      # - name: Cache pip
      #   uses: actions/cache@v4
      #   with:
      #     path: ~/.cache/pip
      #     key: ${{ runner.os }}-pip-${{ hashFiles('requirements.txt', 'requirements-test.txt') }}
      #     restore-keys: |
      #       ${{ runner.os }}-pip-${{ hashFiles('requirements.txt') }}
      #       ${{ runner.os }}-pip-
      - name: Install dependencies
        run: |
          pip install -r requirements.txt
          pip install -r requirements-test.txt
      - name: Install Playwright browsers
        run: |
          python -m playwright install --with-deps
      - name: Wait for database service
        env:
          DATABASE_DRIVER: postgresql
          DATABASE_HOST: postgres
          DATABASE_PORT: "5432"
          DATABASE_NAME: calminer_ci
          DATABASE_USER: calminer
          DATABASE_PASSWORD: secret
          DATABASE_SCHEMA: public
          DATABASE_SUPERUSER: calminer
          DATABASE_SUPERUSER_PASSWORD: secret
          DATABASE_SUPERUSER_DB: calminer_ci
        run: |
          python - <<'PY'
          import os
          import time
          import psycopg2
          dsn = (
              f"dbname={os.environ['DATABASE_SUPERUSER_DB']} "
              f"user={os.environ['DATABASE_SUPERUSER']} "
              f"password={os.environ['DATABASE_SUPERUSER_PASSWORD']} "
              f"host={os.environ['DATABASE_HOST']} "
              f"port={os.environ['DATABASE_PORT']}"
          )
          for attempt in range(30):
              try:
                  with psycopg2.connect(dsn):
                      break
              except psycopg2.OperationalError:
                  time.sleep(2)
          else:
              raise SystemExit("Postgres service did not become available")
          PY
      - name: Run database setup (dry run)
        env:
          DATABASE_DRIVER: postgresql
          DATABASE_HOST: postgres
          DATABASE_PORT: "5432"
          DATABASE_NAME: calminer_ci
          DATABASE_USER: calminer
          DATABASE_PASSWORD: secret
          DATABASE_SCHEMA: public
          DATABASE_SUPERUSER: calminer
          DATABASE_SUPERUSER_PASSWORD: secret
          DATABASE_SUPERUSER_DB: calminer_ci
        run: python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data --dry-run -v
      - name: Run database setup
        env:
          DATABASE_DRIVER: postgresql
          DATABASE_HOST: postgres
          DATABASE_PORT: "5432"
          DATABASE_NAME: calminer_ci
          DATABASE_USER: calminer
          DATABASE_PASSWORD: secret
          DATABASE_SCHEMA: public
          DATABASE_SUPERUSER: calminer
          DATABASE_SUPERUSER_PASSWORD: secret
          DATABASE_SUPERUSER_DB: calminer_ci
        run: python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data -v
      - name: Run tests
        env:
          DATABASE_URL: postgresql+psycopg2://calminer:secret@postgres:5432/calminer_ci
          DATABASE_SCHEMA: public
        run: pytest

.gitignore (vendored, 5 lines changed)

@@ -16,6 +16,9 @@ env/
# environment variables
.env
*.env
# except example files
!config/*.env.example
# github instruction files
.github/instructions/
@@ -41,4 +44,4 @@ logs/
# SQLite database
*.sqlite3
test*.db

Dockerfile (new file, 35 lines)

@@ -0,0 +1,35 @@
# Multi-stage Dockerfile to keep final image small
FROM python:3.10-slim AS builder
# Install build-time packages and Python dependencies in one layer
WORKDIR /app
COPY requirements.txt /app/requirements.txt
RUN echo 'Acquire::http::Proxy "http://192.168.88.14:3142";' > /etc/apt/apt.conf.d/90proxy
RUN apt-get update \
&& apt-get install -y --no-install-recommends build-essential gcc libpq-dev \
&& python -m pip install --upgrade pip \
&& pip install --no-cache-dir --prefix=/install -r /app/requirements.txt \
&& apt-get purge -y --auto-remove build-essential gcc \
&& rm -rf /var/lib/apt/lists/*
FROM python:3.10-slim
WORKDIR /app
# Copy installed packages from builder
COPY --from=builder /install /usr/local
# Assume environment variables for DB config will be set at runtime
# ENV DATABASE_HOST=your_db_host
# ENV DATABASE_PORT=your_db_port
# ENV DATABASE_NAME=your_db_name
# ENV DATABASE_USER=your_db_user
# ENV DATABASE_PASSWORD=your_db_password
# Copy application code
COPY . /app
# Expose service port
EXPOSE 8000
# Run the FastAPI app with uvicorn
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]


@@ -26,8 +26,61 @@ A range of features are implemented to support these functionalities.
## Documentation & quickstart
This repository contains detailed developer and architecture documentation in the `docs/` folder.
[Quickstart](docs/quickstart.md) covers developer setup, migrations, testing, and the current implementation status.
Key architecture documents: see [architecture](docs/architecture/README.md) for the arc42-based architecture documentation.
For contributors: the `routes/`, `models/` and `services/` folders contain the primary application code. Tests and E2E specs are in `tests/`.
## Run with Docker
The repository ships with a multi-stage `Dockerfile` that produces a slim runtime image.
### Build container
```powershell
# Build the image locally
docker build -t calminer:latest .
```
### Push to registry
```powershell
# Tag and push the image to your registry
docker login your-registry.com -u your-username -p your-password
docker tag calminer:latest your-registry.com/your-namespace/calminer:latest
docker push your-registry.com/your-namespace/calminer:latest
```
### Run container
Expose FastAPI on <http://localhost:8000> with database configuration via granular environment variables:
```powershell
# Provide database configuration via granular environment variables
docker run --rm -p 8000:8000 ^
-e DATABASE_DRIVER="postgresql" ^
-e DATABASE_HOST="db.host" ^
-e DATABASE_PORT="5432" ^
-e DATABASE_USER="calminer" ^
-e DATABASE_PASSWORD="s3cret" ^
-e DATABASE_NAME="calminer" ^
-e DATABASE_SCHEMA="public" ^
calminer:latest
```
### Orchestrated Deployment
Use `docker compose` or an orchestrator of your choice to co-locate PostgreSQL/Redis alongside the app when needed. The image expects migrations to be applied before startup.
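A minimal pre-start sketch in Python, assuming the `scripts/setup_database.py` flags used by the test workflow in this change; an orchestrator entrypoint could apply migrations and seed data before launching uvicorn:

```python
# Hypothetical entrypoint sketch: apply migrations/seed data, then start the app.
# The setup flags mirror the CI test workflow; adjust them to your environment.
import subprocess
import sys

subprocess.run(
    [sys.executable, "scripts/setup_database.py", "--run-migrations", "--seed-data", "-v"],
    check=True,
)
subprocess.run(
    [sys.executable, "-m", "uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"],
    check=True,
)
```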
## CI/CD expectations
CalMiner uses Gitea Actions workflows stored in `.gitea/workflows/`:
- `test.yml` runs style/unit/e2e suites on every push with cached Python dependencies.
- `build-and-push.yml` builds the Docker image, reuses cached layers, and pushes to the configured registry.
- `deploy.yml` pulls the pushed image on the target host and restarts the container.
Pipelines assume the following secrets are provisioned in the Gitea instance: `REGISTRY_USERNAME`, `REGISTRY_PASSWORD`, `REGISTRY_URL`, `SSH_HOST`, `SSH_USERNAME`, and `SSH_PRIVATE_KEY`.


@@ -3,10 +3,55 @@ from sqlalchemy.orm import declarative_base, sessionmaker
import os
from dotenv import load_dotenv

load_dotenv()


def _build_database_url() -> str:
    """Construct the SQLAlchemy database URL from granular environment vars.

    Falls back to `DATABASE_URL` for backward compatibility.
    """
    legacy_url = os.environ.get("DATABASE_URL", "")
    if legacy_url and legacy_url.strip() != "":
        return legacy_url
    driver = os.environ.get("DATABASE_DRIVER", "postgresql")
    host = os.environ.get("DATABASE_HOST")
    port = os.environ.get("DATABASE_PORT", "5432")
    user = os.environ.get("DATABASE_USER")
    password = os.environ.get("DATABASE_PASSWORD")
    database = os.environ.get("DATABASE_NAME")
    schema = os.environ.get("DATABASE_SCHEMA", "public")
    missing = [
        var_name
        for var_name, value in (
            ("DATABASE_HOST", host),
            ("DATABASE_USER", user),
            ("DATABASE_NAME", database),
        )
        if not value
    ]
    if missing:
        raise RuntimeError(
            "Missing database configuration: set DATABASE_URL or provide "
            f"granular variables ({', '.join(missing)})"
        )
    url = f"{driver}://{user}:{password}@{host}"
    if port:
        url += f":{port}"
    url += f"/{database}"
    if schema:
        url += f"?options=-csearch_path={schema}"
    return str(url)


DATABASE_URL = _build_database_url()

engine = create_engine(DATABASE_URL, echo=True, future=True)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
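For reference, a minimal usage sketch (hypothetical example values, run from the repository root) showing the URL the helper above produces:

```python
# Hypothetical example values; real deployments read these from .env or the container environment.
import os

os.environ.update({
    "DATABASE_DRIVER": "postgresql",
    "DATABASE_HOST": "localhost",
    "DATABASE_PORT": "5432",
    "DATABASE_USER": "calminer",
    "DATABASE_PASSWORD": "s3cret",
    "DATABASE_NAME": "calminer",
    "DATABASE_SCHEMA": "public",
})

from config.database import DATABASE_URL  # noqa: E402  (import after env setup on purpose)

# Expected result given the values above:
# postgresql://calminer:s3cret@localhost:5432/calminer?options=-csearch_path=public
print(DATABASE_URL)
```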


@@ -0,0 +1,11 @@
# Sample environment configuration for staging deployment
DATABASE_HOST=staging-db.internal
DATABASE_PORT=5432
DATABASE_NAME=calminer_staging
DATABASE_USER=calminer_app
DATABASE_PASSWORD=<app-password>
# Admin connection used for provisioning database and roles
DATABASE_SUPERUSER=postgres
DATABASE_SUPERUSER_PASSWORD=<admin-password>
DATABASE_SUPERUSER_DB=postgres


@@ -0,0 +1,14 @@
# Sample environment configuration for running scripts/setup_database.py against a test instance
DATABASE_DRIVER=postgresql
DATABASE_HOST=postgres
DATABASE_PORT=5432
DATABASE_NAME=calminer_test
DATABASE_USER=calminer_test
DATABASE_PASSWORD=<test-password>
# optional: specify schema if different from 'public'
#DATABASE_SCHEMA=public
# Admin connection used for provisioning database and roles
DATABASE_SUPERUSER=postgres
DATABASE_SUPERUSER_PASSWORD=<superuser-password>
DATABASE_SUPERUSER_DB=postgres


@@ -0,0 +1,23 @@
version: "3.9"

services:
  postgres:
    image: postgres:16-alpine
    container_name: calminer_postgres_local
    restart: unless-stopped
    environment:
      POSTGRES_DB: calminer_local
      POSTGRES_USER: calminer
      POSTGRES_PASSWORD: secret
    ports:
      - "5433:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U calminer -d calminer_local"]
      interval: 10s
      timeout: 5s
      retries: 10
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data:


@@ -1,12 +1,30 @@
---
title: "01 — Introduction and Goals"
description: "System purpose, stakeholders, and high-level goals; project introduction and business/technical goals."
status: draft
---
# 01 — Introduction and Goals
## Purpose
CalMiner aims to provide a comprehensive platform for mining project scenario analysis, enabling stakeholders to make informed decisions based on data-driven insights.
## Stakeholders
- **Project Managers**: Require tools for scenario planning and risk assessment.
- **Data Analysts**: Need access to historical data and simulation results for analysis.
- **Executives**: Seek high-level insights and reporting for strategic decision-making.
## High-Level Goals
1. **Comprehensive Scenario Analysis**: Enable users to create and analyze multiple project scenarios to assess risks and opportunities.
2. **Data-Driven Decision Making**: Provide stakeholders with the insights needed to make informed decisions based on simulation results.
3. **User-Friendly Interface**: Ensure the platform is accessible and easy to use for all stakeholders, regardless of technical expertise.
## System Overview
CalMiner is a FastAPI application that collects mining project inputs, persists scenario-specific records, and surfaces aggregated insights. The platform targets Monte Carlo driven planning, with deterministic CRUD features in place and simulation logic staged for future work.
Frontend components are server-rendered Jinja2 templates, with Chart.js powering the dashboard visualization. The backend leverages SQLAlchemy for ORM mapping to a PostgreSQL database.
@@ -20,7 +38,7 @@ Frontend components are server-rendered Jinja2 templates, with Chart.js powering
### Current implementation status (summary)
- Currency normalization, simulation scaffold, and reporting service exist; see [quickstart](../quickstart.md) for full status and migration instructions.
## MVP Features (migrated)


@@ -1,5 +1,175 @@
---
title: "02 — Architecture Constraints"
description: "Document imposed constraints: technical, organizational, regulatory, and environmental constraints that affect architecture decisions."
status: skeleton
---
# 02 — Architecture Constraints
## Technical Constraints
> e.g., choice of FastAPI, PostgreSQL, SQLAlchemy, Chart.js, Jinja2 templates.
The architecture of CalMiner is influenced by several technical constraints that shape its design and implementation:
1. **Framework Selection**: The choice of FastAPI as the web framework imposes constraints on how the application handles requests, routing, and middleware. FastAPI's asynchronous capabilities must be leveraged appropriately to ensure optimal performance.
2. **Database Technology**: The use of PostgreSQL as the primary database system dictates the data modeling, querying capabilities, and transaction management strategies. SQLAlchemy ORM is used for database interactions, which requires adherence to its conventions and limitations.
3. **Frontend Technologies**: The decision to use Jinja2 for server-side templating and Chart.js for data visualization influences the structure of the frontend code and the way dynamic content is rendered.
4. **Simulation Logic**: The Monte Carlo simulation logic must be designed to efficiently handle large datasets and perform computations within the constraints of the chosen programming language (Python) and its libraries.
## Organizational Constraints
> e.g., team skillsets, development workflows, CI/CD pipelines.
Restrictions arising from organizational factors include:
1. **Team Expertise**: The development team's familiarity with FastAPI, SQLAlchemy, and frontend technologies like Jinja2 and Chart.js influences the architecture choices to ensure maintainability and ease of development.
2. **Development Processes**: The adoption of Agile methodologies and CI/CD pipelines (using Gitea Actions) shapes the architecture to support continuous integration, automated testing, and deployment practices.
3. **Collaboration Tools**: The use of specific collaboration and version control tools (e.g., Gitea) affects how code is managed, reviewed, and integrated, impacting the overall architecture and development workflow.
4. **Documentation Standards**: The requirement for comprehensive documentation (as seen in the `docs/` folder) necessitates an architecture that is well-structured and easy to understand for both current and future team members.
5. **Knowledge Sharing**: The need for effective knowledge sharing and onboarding processes influences the architecture to ensure that it is accessible and understandable for new team members.
6. **Resource Availability**: The availability of hardware, software, and human resources within the organization can impose constraints on the architecture, affecting decisions related to scalability, performance, and feature implementation.
## Regulatory Constraints
> e.g., data privacy laws, industry standards.
Regulatory constraints that impact the architecture of CalMiner include:
1. **Data Privacy Compliance**: The architecture must ensure compliance with data privacy regulations such as GDPR or CCPA, which may dictate how user data is collected, stored, and processed.
2. **Industry Standards**: Adherence to industry-specific standards and best practices may influence the design of data models, security measures, and reporting functionalities.
3. **Auditability**: The system may need to incorporate logging and auditing features to meet regulatory requirements, affecting the architecture of data storage and access controls.
4. **Data Retention Policies**: Regulatory requirements regarding data retention and deletion may impose constraints on how long certain types of data can be stored, influencing database design and data lifecycle management.
5. **Security Standards**: Compliance with security standards (e.g., ISO/IEC 27001) may necessitate the implementation of specific security measures, such as encryption, access controls, and vulnerability management, which impact the overall architecture.
## Environmental Constraints
> e.g., deployment environments, cloud provider limitations.
Environmental constraints affecting the architecture include:
1. **Deployment Environments**: The architecture must accommodate various deployment environments (development, testing, production) with differing configurations and resource allocations.
2. **Cloud Provider Limitations**: If deployed on a specific cloud provider, the architecture may need to align with the provider's services, limitations, and best practices, such as using managed databases or specific container orchestration tools.
3. **Containerization**: The use of Docker for containerization imposes constraints on how the application is packaged, deployed, and scaled, influencing the architecture to ensure compatibility with container orchestration platforms.
4. **Scalability Requirements**: The architecture must be designed to scale efficiently based on anticipated load and usage patterns, considering the limitations of the chosen infrastructure.
## Performance Constraints
> e.g., response time requirements, scalability needs.
Current performance constraints include:
1. **Response Time Requirements**: The architecture must ensure that the system can respond to user requests within a specified time frame, which may impact design decisions related to caching, database queries, and API performance.
2. **Scalability Needs**: The system should be able to handle increased load and user traffic without significant degradation in performance, necessitating a scalable architecture that can grow with demand.
## Security Constraints
> e.g., authentication mechanisms, data encryption standards.
## Budgetary Constraints
> e.g., licensing costs, infrastructure budgets.
## Time Constraints
> e.g., project deadlines, release schedules.
## Interoperability Constraints
> e.g., integration with existing systems, third-party services.
## Maintainability Constraints
> e.g., code modularity, documentation standards.
## Usability Constraints
> e.g., user interface design principles, accessibility requirements.
## Data Constraints
> e.g., data storage formats, data retention policies.
## Deployment Constraints
> e.g., deployment environments, cloud provider limitations.
## Testing Constraints
> e.g., testing frameworks, test coverage requirements.
## Localization Constraints
> e.g., multi-language support, regional settings.
## Versioning Constraints
> e.g., API versioning strategies, backward compatibility.
## Monitoring Constraints
> e.g., logging standards, performance monitoring tools.
## Backup and Recovery Constraints
> e.g., data backup frequency, disaster recovery plans.
## Development Constraints
> e.g., coding languages, frameworks, libraries to be used or avoided.
## Collaboration Constraints
> e.g., communication tools, collaboration platforms.
## Documentation Constraints
> e.g., documentation tools, style guides.
## Training Constraints
> e.g., training programs, skill development initiatives.
## Support Constraints
> e.g., support channels, response time expectations.
## Legal Constraints
> e.g., compliance requirements, intellectual property considerations.
## Ethical Constraints
> e.g., ethical considerations in data usage, user privacy.
## Environmental Impact Constraints
> e.g., energy consumption considerations, sustainability goals.
## Innovation Constraints
> e.g., limitations on adopting new technologies, risk tolerance for experimentation.
## Cultural Constraints
> e.g., organizational culture, team dynamics affecting development practices.
## Stakeholder Constraints
> e.g., stakeholder expectations, communication preferences.
## Change Management Constraints
> e.g., processes for handling changes, version control practices.
## Resource Constraints
> e.g., availability of hardware, software, and human resources.
## Process Constraints
> e.g., development methodologies (Agile, Scrum), project management tools.
## Quality Constraints
> e.g., code quality standards, testing requirements.


@@ -1,5 +1,57 @@
---
title: "03 — Context and Scope"
description: "Describe system context, external actors, and the scope of the architecture."
status: draft
---
# 03 — Context and Scope
## System Context
The CalMiner system operates within the context of mining project management, providing tools for scenario analysis and decision support. It interacts with various data sources, including historical project data and real-time operational metrics.
## External Actors
- **Project Managers**: Utilize the platform for scenario planning and risk assessment.
- **Data Analysts**: Analyze simulation results and derive insights.
- **Executives**: Review high-level reports and dashboards for strategic decision-making.
## Scope of the Architecture
The architecture encompasses the following key areas:
1. **Data Ingestion**: Mechanisms for collecting and processing data from various sources.
2. **Data Storage**: Solutions for storing and managing historical and real-time data.
3. **Simulation Engine**: Core algorithms and models for scenario analysis.
3.1. **Modeling Framework**: Tools for defining and managing simulation models.
3.2. **Parameter Management**: Systems for handling input parameters and configurations.
3.3. **Execution Engine**: Infrastructure for running simulations and processing results.
3.4. **Result Storage**: Systems for storing simulation outputs for analysis and reporting.
4. **Financial Reporting**: Tools for generating reports and visualizations based on simulation outcomes.
5. **Risk Assessment**: Frameworks for identifying and evaluating potential project risks.
6. **Profitability Analysis**: Modules for calculating and analyzing project profitability metrics.
7. **User Interface**: Design and implementation of the user-facing components of the system.
8. **Security and Compliance**: Measures to ensure data security and regulatory compliance.
9. **Scalability and Performance**: Strategies for ensuring the system can handle increasing data volumes and user loads.
10. **Integration Points**: Interfaces for integrating with external systems and services.
11. **Monitoring and Logging**: Systems for tracking system performance and user activity.
12. **Maintenance and Support**: Processes for ongoing system maintenance and user support.
## Diagram
```mermaid
sequenceDiagram
participant PM as Project Manager
participant DA as Data Analyst
participant EX as Executive
participant CM as CalMiner System
PM->>CM: Create and manage scenarios
DA->>CM: Analyze simulation results
EX->>CM: Review reports and dashboards
CM->>PM: Provide scenario planning tools
CM->>DA: Deliver analysis insights
CM->>EX: Generate high-level reports
```
This diagram illustrates the key components of the CalMiner system and their interactions with external actors.


@@ -1,20 +1,49 @@
---
title: "04 — Solution Strategy"
description: "High-level solution strategy describing major approaches, technology choices, and trade-offs."
status: draft
---
# 04 — Solution Strategy
This section outlines the high-level solution strategy for implementing the CalMiner system, focusing on major approaches, technology choices, and trade-offs.
## Client-Server Architecture
- **Backend**: FastAPI serves as the backend framework, providing RESTful APIs for data management, simulation execution, and reporting. It leverages SQLAlchemy for ORM-based database interactions with PostgreSQL.
- **Frontend**: Server-rendered Jinja2 templates deliver dynamic HTML views, enhanced with Chart.js for interactive data visualizations. This approach balances performance and simplicity, avoiding the complexity of a full SPA.
- **Middleware**: Custom middleware handles JSON validation to ensure data integrity before processing requests.
## Technology Choices
- **FastAPI**: Chosen for its high performance, ease of use, and modern features like async support and automatic OpenAPI documentation.
- **PostgreSQL**: Selected for its robustness, scalability, and support for complex queries, making it suitable for handling the diverse data needs of mining project management.
- **SQLAlchemy**: Provides a flexible and powerful ORM layer, facilitating database interactions while maintaining code readability and maintainability.
- **Chart.js**: Utilized for its simplicity and effectiveness in rendering interactive charts, enhancing the user experience on the dashboard.
- **Jinja2**: Enables server-side rendering of HTML templates, allowing for dynamic content generation while keeping the frontend lightweight.
- **Pydantic**: Used for data validation and serialization, ensuring that incoming request payloads conform to expected schemas.
- **Docker**: Employed for containerization, ensuring consistent deployment across different environments and simplifying dependency management.
- **Redis**: Used as an in-memory data store to cache frequently accessed data, improving application performance and reducing database load.
## Trade-offs
- **Server-Rendered vs. SPA**: Opted for server-rendered templates over a single-page application (SPA) to reduce complexity and improve initial load times, at the cost of some interactivity.
- **Synchronous vs. Asynchronous**: While FastAPI supports async operations, the initial implementation focuses on synchronous request handling for simplicity, with plans to introduce async features as needed.
- **Monolithic vs. Microservices**: The initial architecture follows a monolithic approach for ease of development and deployment, with the possibility of refactoring into microservices as the system scales.
- **In-Memory Caching**: Implementing Redis for caching introduces additional infrastructure complexity but significantly enhances performance for read-heavy operations.
- **Database Choice**: PostgreSQL was chosen over NoSQL alternatives due to the structured nature of the data and the need for complex querying capabilities, despite potential scalability challenges.
- **Technology Familiarity**: Selected technologies align with the team's existing skill set to minimize the learning curve and accelerate development, even if some alternatives may offer marginally better performance or features.
- **Extensibility vs. Simplicity**: The architecture is designed to be extensible for future features (e.g., the Monte Carlo simulation engine, sketched after this section) while maintaining simplicity in the initial implementation to ensure timely delivery of core functionalities.
## Future Considerations
- **Scalability**: As the user base grows, consider transitioning to a microservices architecture and implementing load balancing strategies.
- **Asynchronous Processing**: Introduce asynchronous task queues (e.g., Celery) for long-running simulations to improve responsiveness.
- **Enhanced Frontend**: Explore the possibility of integrating a frontend framework (e.g., React or Vue.js) for more dynamic user interactions in future iterations.
- **Advanced Analytics**: Plan for integrating advanced analytics and machine learning capabilities to enhance simulation accuracy and reporting insights.
- **Security Enhancements**: Implement robust authentication and authorization mechanisms to protect sensitive data and ensure compliance with industry standards.
- **Continuous Integration/Continuous Deployment (CI/CD)**: Establish CI/CD pipelines to automate testing, building, and deployment processes for faster and more reliable releases.
- **Monitoring and Logging**: Integrate monitoring tools (e.g., Prometheus, Grafana) and centralized logging solutions (e.g., ELK stack) to track application performance and troubleshoot issues effectively.
- **User Feedback Loop**: Implement mechanisms for collecting user feedback to inform future development priorities and improve user experience.
- **Documentation**: Maintain comprehensive documentation for both developers and end-users to facilitate onboarding and effective use of the system.
- **Testing Strategy**: Develop a robust testing strategy, including unit, integration, and end-to-end tests, to ensure code quality and reliability as the system evolves.


@@ -1,4 +1,4 @@
# Implementation Plan (extended) # Implementation Plan 2025-10-20
This file contains the implementation plan (MVP features, steps, and estimates). This file contains the implementation plan (MVP features, steps, and estimates).
@@ -6,7 +6,7 @@ This file contains the implementation plan (MVP features, steps, and estimates).
1. Connect to PostgreSQL database with schema `calminer`.
1. Create and activate a virtual environment and install dependencies via `requirements.txt`.
1. Define database environment variables in `.env` (e.g., `DATABASE_DRIVER`, `DATABASE_HOST`, `DATABASE_PORT`, `DATABASE_USER`, `DATABASE_PASSWORD`, `DATABASE_NAME`, optional `DATABASE_SCHEMA`).
1. Configure FastAPI entrypoint in `main.py` to include routers.
## Feature: Scenario Management
@@ -107,23 +107,4 @@ This file contains the implementation plan (MVP features, steps, and estimates).
1. Write unit tests in `tests/unit/test_reporting.py`.
1. Enhance UI in `components/Dashboard.html` with charts.
See [UI and Style](../13_ui_and_style.md) for the UI template audit, layout guidance, and next steps.
## MVP Feature Analysis (summary)
Goal: Identify core MVP features, acceptance criteria, and quick estimates.
### Edge cases to consider
- Large simulation runs (memory / timeouts) — use streaming, chunking, or background workers.
- DB migration and schema versioning.
- Authentication/authorization for scenario access.
### Next actionable items
1. Break Scenario Management into sub-issues (models, routes, tests, simple UI).
1. Scaffold Parameter Input & Validation (models/parameters.py, middleware, routes, tests).
1. Prototype the simulation engine with a small deterministic runner and unit tests.
1. Scaffold Monte Carlo Simulation endpoints (`services/simulation.py`, `routes/simulations.py`, tests).
1. Scaffold Reporting endpoints (`services/reporting.py`, `routes/reporting.py`, front-end Dashboard, tests).
1. Add CI job for tests and coverage.
See [UI and Style](docs/architecture/13_ui_and_style.md) for the UI template audit, layout guidance, and next steps.


@@ -1,16 +1,40 @@
---
title: "05 — Building Block View"
description: "Explain the static structure: modules, components, services and their relationships."
status: draft
---
# 05 — Building Block View
## Architecture overview
This overview complements [architecture](README.md) with a high-level map of CalMiner's module layout and request flow.
Refer to the detailed architecture chapters in `docs/architecture/`:
- Module map & components: [Building Block View](05_building_block_view.md)
- Request flow & runtime interactions: [Runtime View](06_runtime_view.md)
- Simulation roadmap & strategy: [Solution Strategy](04_solution_strategy.md)
## System Components
### Backend
- **FastAPI application** (`main.py`): entry point that configures routers, middleware, and startup/shutdown events.
- **Routers** (`routes/`): modular route handlers for scenarios, parameters, costs, consumption, production, equipment, maintenance, simulations, and reporting. Each router defines RESTful endpoints, request/response schemas, and orchestrates service calls, leveraging a shared dependency module (`routes/dependencies.get_db`) for SQLAlchemy session management.
- **Models** (`models/`): SQLAlchemy ORM models representing database tables and relationships, encapsulating domain entities like Scenario, CapEx, OpEx, Consumption, ProductionOutput, Equipment, Maintenance, and SimulationResult.
- **Services** (`services/`): business logic layer that processes data, performs calculations, and interacts with models. Key services include reporting calculations and Monte Carlo simulation scaffolding.
- **Database** (`config/database.py`): sets up the SQLAlchemy engine and session management for PostgreSQL interactions.
### Frontend
- **Templates** (`templates/`): Jinja2 templates for server-rendered HTML views, extending a shared base layout with a persistent sidebar for navigation.
- **Static Assets** (`static/`): CSS and JavaScript files for styling and interactivity. Shared CSS variables in `static/css/main.css` define the color palette, while page-specific JS modules in `static/js/` handle dynamic behaviors.
- **Reusable partials** (`templates/partials/components.html`): macro library that standardises select inputs, feedback/empty states, and table wrappers so pages remain consistent while keeping DOM hooks stable for existing JavaScript modules.
### Middleware & Utilities
- **Middleware** (`middleware/validation.py`): applies JSON validation before requests reach routers.
- **Testing** (`tests/unit/`): pytest suite covering route and service behavior, including UI rendering checks and negative-path router validation tests to ensure consistent HTTP error semantics. Playwright end-to-end coverage is planned for core smoke flows (dashboard load, scenario inputs, reporting) and will attach in CI once scaffolding is completed.
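For illustration, a minimal pure-ASGI sketch of JSON validation applied before routing; the actual `middleware/validation.py` may be structured differently:

```python
import json

from starlette.responses import JSONResponse


class JSONValidationMiddleware:
    """Reject malformed JSON bodies before they reach the routers (illustrative sketch)."""

    def __init__(self, app):
        self.app = app

    async def __call__(self, scope, receive, send):
        if scope["type"] != "http":
            await self.app(scope, receive, send)
            return
        content_type = dict(scope.get("headers") or []).get(b"content-type", b"").decode()
        if not content_type.startswith("application/json"):
            await self.app(scope, receive, send)
            return
        # Buffer the full request body so it can be validated and replayed downstream.
        body = b""
        while True:
            message = await receive()
            body += message.get("body", b"")
            if not message.get("more_body", False):
                break
        if body:
            try:
                json.loads(body)
            except json.JSONDecodeError:
                response = JSONResponse({"detail": "Malformed JSON payload"}, status_code=400)
                await response(scope, receive, send)
                return

        async def replay():
            return {"type": "http.request", "body": body, "more_body": False}

        await self.app(scope, replay, send)


# Registration sketch (in main.py): app.add_middleware(JSONValidationMiddleware)
```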
@@ -20,15 +44,14 @@ Explain the static structure: modules, components, services and their relationsh
- `capex.py`, `opex.py`: financial expenditures tied to scenarios.
- `consumption.py`, `production_output.py`: operational data tables.
- `equipment.py`, `maintenance.py`: asset management models.
- `simulation_result.py`: stores Monte Carlo iteration outputs.
## Service Layer
- `reporting.py`: computes aggregates (count, min/max, mean, median, percentiles, standard deviation, variance, tail-risk metrics) from simulation results.
- `simulation.py`: scaffolds Monte Carlo simulation logic (currently in-memory; persistence planned).
- `currency.py`: handles currency normalization for cost tables.
- `utils.py`: shared helper functions (e.g., statistical calculations).
- `validation.py`: JSON schema validation middleware.
- `database.py`: SQLAlchemy engine and session setup.
- `dependencies.py`: FastAPI dependency injection for DB sessions.
Currency normalization and backfill tooling have been added (see [backfill_currency.py](scripts/backfill_currency.py) and related migrations) to support canonical currency lookups across cost tables.
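A minimal sketch of the kind of aggregates `reporting.py` produces; the field names and the percentile method are illustrative assumptions, not the service's actual code:

```python
import statistics


def summarize(values: list[float]) -> dict:
    """Compute count, spread, percentile, and tail-risk style metrics (illustrative)."""
    ordered = sorted(values)
    n = len(ordered)

    def percentile(p: float) -> float:
        # Nearest-rank percentile, good enough for a sketch.
        index = max(0, min(n - 1, round(p / 100 * (n - 1))))
        return ordered[index]

    var_95 = percentile(5)  # 5th percentile of outcomes, read as Value at Risk (95%)
    tail = [v for v in ordered if v <= var_95]
    return {
        "count": n,
        "min": ordered[0],
        "max": ordered[-1],
        "mean": statistics.fmean(ordered),
        "median": statistics.median(ordered),
        "std_dev": statistics.pstdev(ordered),
        "variance": statistics.pvariance(ordered),
        "p05": percentile(5),
        "p95": percentile(95),
        "value_at_risk_95": var_95,
        "expected_shortfall_95": statistics.fmean(tail) if tail else var_95,
    }


print(summarize([12.0, 9.5, 14.2, 8.1, 11.7]))
```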


@@ -1,8 +1,179 @@
---
title: "06 — Runtime View"
description: "Describe runtime aspects: request flows, lifecycle of key interactions, and runtime components."
status: draft
---
# 06 — Runtime View
## Overview
The runtime view focuses on the dynamic behavior of the CalMiner application during execution. It illustrates how various components interact to fulfill user requests, process data, and generate outputs. Key runtime scenarios include scenario management, parameter input handling, cost tracking, consumption tracking, production output recording, equipment management, maintenance logging, Monte Carlo simulations, and reporting.
## Request Flow
1. **User Interaction**: A user interacts with the web application through the UI, triggering actions such as creating a scenario, inputting parameters, or generating reports.
2. **API Request**: The frontend sends HTTP requests (GET, POST, PUT, DELETE) to the appropriate API endpoints defined in the `routes/` directory.
3. **Routing**: The FastAPI framework routes the incoming requests to the corresponding route handlers.
4. **Service Layer**: Route handlers invoke services from the `services/` directory to process the business logic.
5. **Database Interaction**: Services interact with the database via ORM models defined in the `models/` directory to perform CRUD operations.
6. **Response Generation**: After processing, services return data to the route handlers, which format the response (JSON or HTML) and send it back to the frontend.
7. **UI Update**: The frontend updates the UI based on the response, rendering new data or updating existing views.
8. **Reporting Pipeline**: For reporting, data is aggregated from various sources, processed to generate statistics, and presented in the dashboard using Chart.js.
9. **Monte Carlo Simulations**: Stochastic simulations are executed in the backend, generating probabilistic outcomes that are stored temporarily and used for risk analysis in reports.
10. **Error Handling**: Throughout the process, error handling mechanisms ensure that exceptions are caught and appropriate responses are sent back to the user.
Request flow diagram:
```mermaid
sequenceDiagram
participant User
participant Frontend
participant API
participant Service
participant Database
User->>Frontend: Interact with UI
Frontend->>API: Send HTTP Request
API->>Service: Route to Handler
Service->>Database: Perform CRUD Operation
Database-->>Service: Return Data
Service-->>API: Return Processed Data
API-->>Frontend: Send Response
Frontend-->>User: Update UI
participant Reporting
Service->>Reporting: Aggregate Data
Reporting-->>Service: Return Report Data
Service-->>API: Return Report Response
API-->>Frontend: Send Report Data
Frontend-->>User: Render Report
participant Simulation
Service->>Simulation: Execute Monte Carlo Simulation
Simulation-->>Service: Return Simulation Results
Service-->>API: Return Simulation Data
API-->>Frontend: Send Simulation Data
Frontend-->>User: Display Simulation Results
```
## Key Runtime Scenarios
### Scenario Management
1. User accesses the scenario list via the UI.
2. The frontend sends a GET request to `/api/scenarios`.
3. The `ScenarioService` retrieves scenarios from the database.
4. The response is rendered in the UI.
5. For scenario creation, the user submits a form, triggering a POST request to `/api/scenarios`, which the `ScenarioService` processes to create a new scenario in the database.
6. The UI updates to reflect the new scenario.
Scenario management diagram:
```mermaid
sequenceDiagram
participant User
participant Frontend
participant API
participant ScenarioService
participant Database
User->>Frontend: Access Scenario List
Frontend->>API: GET /api/scenarios
API->>ScenarioService: Route to Handler
ScenarioService->>Database: Retrieve Scenarios
Database-->>ScenarioService: Return Scenarios
ScenarioService-->>API: Return Scenario Data
API-->>Frontend: Send Response
Frontend-->>User: Render Scenario List
User->>Frontend: Submit New Scenario Form
Frontend->>API: POST /api/scenarios
API->>ScenarioService: Route to Handler
ScenarioService->>Database: Create New Scenario
Database-->>ScenarioService: Confirm Creation
ScenarioService-->>API: Return New Scenario Data
API-->>Frontend: Send Response
Frontend-->>User: Update UI with New Scenario
```
### Process Parameter Input
1. User navigates to the parameter input form.
2. The frontend fetches existing parameters via a GET request to `/api/parameters`.
3. The `ParameterService` retrieves parameters from the database.
4. The response is rendered in the UI.
5. For parameter updates, the user submits a form, triggering a PUT request to `/api/parameters/:id`, which the `ParameterService` processes to update the parameter in the database.
6. The UI updates to reflect the changes.
Parameter input diagram:
```mermaid
sequenceDiagram
participant User
participant Frontend
participant API
participant ParameterService
participant Database
User->>Frontend: Navigate to Parameter Input Form
Frontend->>API: GET /api/parameters
API->>ParameterService: Route to Handler
ParameterService->>Database: Retrieve Parameters
Database-->>ParameterService: Return Parameters
ParameterService-->>API: Return Parameter Data
API-->>Frontend: Send Response
Frontend-->>User: Render Parameter Form
User->>Frontend: Submit Parameter Update Form
Frontend->>API: PUT /api/parameters/:id
API->>ParameterService: Route to Handler
ParameterService->>Database: Update Parameter
Database-->>ParameterService: Confirm Update
ParameterService-->>API: Return Updated Parameter Data
API-->>Frontend: Send Response
Frontend-->>User: Update UI with Updated Parameter
```
### Cost Tracking
1. User accesses the cost tracking view.
2. The frontend sends a GET request to `/api/costs` to fetch existing cost records.
3. The `CostService` retrieves cost data from the database.
4. The response is rendered in the UI.
5. For cost updates, the user submits a form, triggering a PUT request to `/api/costs/:id`, which the `CostService` processes to update the cost record in the database.
6. The UI updates to reflect the changes.
Cost tracking diagram:
```mermaid
sequenceDiagram
participant User
participant Frontend
participant API
participant CostService
participant Database
User->>Frontend: Access Cost Tracking View
Frontend->>API: GET /api/costs
API->>CostService: Route to Handler
CostService->>Database: Retrieve Cost Records
Database-->>CostService: Return Cost Data
CostService-->>API: Return Cost Data
API-->>Frontend: Send Response
Frontend-->>User: Render Cost Tracking View
User->>Frontend: Submit Cost Update Form
Frontend->>API: PUT /api/costs/:id
API->>CostService: Route to Handler
CostService->>Database: Update Cost Record
Database-->>CostService: Confirm Update
CostService-->>API: Return Updated Cost Data
API-->>Frontend: Send Response
Frontend-->>User: Update UI with Updated Cost Data
```
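An API-level test for the cost update step could look like this sketch, using FastAPI's `TestClient` against the `main:app` entry point; the record id and payload fields are assumptions.

```python
from fastapi.testclient import TestClient

from main import app  # application entry point started by uvicorn

client = TestClient(app)


def test_update_cost_record():
    # Assumes a cost record with id 1 exists (e.g., created by a fixture).
    response = client.put(
        "/api/costs/1",
        json={"amount": 1250.0, "description": "Updated haulage cost"},
    )
    assert response.status_code == 200
    assert response.json()["amount"] == 1250.0
```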
## Reporting Pipeline and UI Integration
- `templates/Dashboard.html` posts the user-provided dataset to the summary endpoint, renders metric cards for each field, and charts the distribution using Chart.js.
- `SUMMARY_FIELDS` now includes variance, 5th/10th/90th/95th percentiles, and tail-risk metrics (VaR/Expected Shortfall at 95%); tooltip annotations surface the tail metrics alongside the percentile line chart.
- Error handling surfaces HTTP failures inline so users can address malformed JSON or backend availability issues without leaving the page.
Reporting pipeline diagram:
```mermaid
sequenceDiagram
participant User
participant Frontend
participant API
participant ReportingService
User->>Frontend: Input Data for Reporting
Frontend->>API: POST /api/reporting/summary
API->>ReportingService: Route to Handler
ReportingService->>ReportingService: Validate Payload
ReportingService->>ReportingService: Compute Statistics
ReportingService-->>API: Return Report Summary
API-->>Frontend: Send Report Summary
Frontend-->>User: Render Report Metrics and Charts
```
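The metrics listed above can be derived from the submitted sample roughly as follows. This is a numpy sketch, not the actual `ReportingService` implementation, and the loss sign convention for VaR/Expected Shortfall is an assumption.

```python
import numpy as np


def summarize(values: list[float]) -> dict[str, float]:
    data = np.asarray(values, dtype=float)
    losses = -data  # treat negative outcomes as losses (assumed convention)
    var_95 = float(np.percentile(losses, 95))
    return {
        "mean": float(data.mean()),
        "variance": float(data.var(ddof=1)),
        "p05": float(np.percentile(data, 5)),
        "p10": float(np.percentile(data, 10)),
        "p90": float(np.percentile(data, 90)),
        "p95": float(np.percentile(data, 95)),
        "var_95": var_95,
        # Expected Shortfall: average loss in the tail beyond the 95% VaR.
        "es_95": float(losses[losses >= var_95].mean()),
    }
```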
## Monte Carlo Simulation Execution
1. User initiates a Monte Carlo simulation via the UI.
2. The frontend sends a POST request to `/api/simulations/run` with simulation parameters.
3. The `SimulationService` executes the Monte Carlo logic, generating stochastic results.
4. The results are temporarily stored and returned to the frontend.
5. The UI displays the simulation results and allows users to trigger reporting based on these outcomes.
6. The reporting pipeline processes the simulation results as described above.
7. Error handling ensures that any issues during simulation execution are communicated back to the user.
Monte Carlo simulation diagram:
```mermaid
sequenceDiagram
participant User
participant Frontend
participant API
participant SimulationService
User->>Frontend: Input Simulation Parameters
Frontend->>API: POST /api/simulations/run
API->>SimulationService: Route to Handler
SimulationService->>SimulationService: Execute Monte Carlo Logic
SimulationService-->>API: Return Simulation Results
API-->>Frontend: Send Simulation Results
Frontend-->>User: Render Simulation Results
```
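A stripped-down version of such a simulation loop is shown below; the real `SimulationService` samples the distributions configured per parameter, so the normal distribution and argument names here are placeholders.

```python
import numpy as np


def run_monte_carlo(base_value: float, volatility: float, iterations: int = 10_000) -> dict:
    rng = np.random.default_rng()
    # Placeholder sampling; the service would draw from each parameter's configured distribution.
    samples = rng.normal(loc=base_value, scale=volatility, size=iterations)
    return {
        "iterations": iterations,
        "mean": float(samples.mean()),
        "p05": float(np.percentile(samples, 5)),
        "p95": float(np.percentile(samples, 95)),
    }
```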
## Error Handling
Throughout the runtime processes, error handling mechanisms are implemented to catch exceptions and provide meaningful feedback to users. Common error scenarios include:
- Invalid input data
- Database connection issues
- Simulation execution errors
- Reporting calculation failures
- API endpoint unavailability
- Timeouts during long-running operations
- Unauthorized access attempts
- Data validation failures
- Resource not found errors
Error handling diagram:
```mermaid
sequenceDiagram
participant User
participant Frontend
participant API
participant Service
User->>Frontend: Perform Action
Frontend->>API: Send Request
API->>Service: Route to Handler
Service->>Service: Process Request
alt Success
Service-->>API: Return Data
API-->>Frontend: Send Response
Frontend-->>User: Update UI
else Error
Service-->>API: Return Error
API-->>Frontend: Send Error Response
Frontend-->>User: Display Error Message
end
```
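One way to realize the error branch consistently in FastAPI is a dedicated exception handler that converts service-level failures into structured responses; the exception class and status code below are illustrative, not the project's actual handler.

```python
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

app = FastAPI()


class SimulationError(Exception):
    """Hypothetical domain error raised by the simulation layer."""


@app.exception_handler(SimulationError)
async def simulation_error_handler(request: Request, exc: SimulationError) -> JSONResponse:
    # Return a structured payload so the frontend can render the message inline.
    return JSONResponse(status_code=422, content={"detail": str(exc)})
```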

---
title: "07 — Deployment View"
description: "Describe deployment topology, infrastructure components, and environments (dev/stage/prod)."
status: draft
---
<!-- markdownlint-disable-next-line MD025 -->
# 07 — Deployment View
## Deployment Topology
The CalMiner application is deployed using a multi-tier architecture consisting of the following layers:
1. **Client Layer**: This layer consists of web browsers that interact with the application through a user interface rendered by Jinja2 templates and enhanced with JavaScript (Chart.js for dashboards).
2. **Web Application Layer**: This layer hosts the FastAPI application, which handles API requests, business logic, and serves HTML templates. It communicates with the database layer for data persistence.
3. **Database Layer**: This layer consists of a PostgreSQL database that stores all application data, including scenarios, parameters, costs, consumption, production outputs, equipment, maintenance logs, and simulation results.
4. **Caching Layer**: This layer uses Redis to cache frequently accessed data and improve application performance.
## Infrastructure Components
The infrastructure components for the application include:
- **Web Server**: Hosts the FastAPI application and serves API endpoints.
- **Database Server**: PostgreSQL database for persisting application data.
- **Static File Server**: Serves static assets such as CSS, JavaScript, and image files.
- **Reverse Proxy (optional)**: An Nginx or Apache server can be used as a reverse proxy.
- **Containerization**: Docker images are generated via the repository `Dockerfile`, using a multi-stage build to keep the final runtime minimal.
- **CI/CD Pipeline**: Automated pipelines (Gitea Actions) run tests, build/push Docker images, and trigger deployments.
- **Cloud Infrastructure (optional)**: The application can be deployed on cloud platforms.
## Environments
The application can be deployed in multiple environments to support development, testing, and production:
### Development Environment
The development environment is set up for local development and testing. It includes:
- Local PostgreSQL instance (Docker Compose recommended; a compose file is provided at `docker-compose.postgres.yml`)
- FastAPI server running in debug mode
### Testing Environment
The testing environment is set up for automated testing and quality assurance. It includes:
- Staging PostgreSQL instance
- FastAPI server running in testing mode
- Automated test suite (e.g., pytest) for running unit and integration tests
### Production Environment
The production environment is set up for serving live traffic and includes:
- Production PostgreSQL instance
- FastAPI server running in production mode
- Load balancer (e.g., Nginx) for distributing incoming requests
- Monitoring and logging tools for tracking application performance
## Containerized Deployment Flow
The Docker-based deployment path aligns with the solution strategy documented in [04 — Solution Strategy](04_solution_strategy.md) and the CI practices captured in [14 — Testing & CI](14_testing_ci.md).
### Image Build
- The multi-stage `Dockerfile` installs dependencies in a builder layer (including system compilers and Python packages) and copies only the required runtime artifacts to the final image.
- Build arguments are minimal; database configuration is supplied at runtime via granular variables (`DATABASE_DRIVER`, `DATABASE_HOST`, `DATABASE_PORT`, `DATABASE_USER`, `DATABASE_PASSWORD`, `DATABASE_NAME`, optional `DATABASE_SCHEMA`). Secrets and configuration should be passed via environment variables or an orchestrator.
- The resulting image exposes port `8000` and starts `uvicorn main:app` (see [README.md](../../README.md)).
### Runtime Environment
- For single-node deployments, run the container alongside PostgreSQL/Redis using Docker Compose or an equivalent orchestrator.
- A reverse proxy (e.g., Nginx) terminates TLS and forwards traffic to the container on port `8000`.
- Migrations must be applied prior to rolling out a new image; automation can hook into the deploy step to run `scripts/run_migrations.py`.
### CI/CD Integration
- Gitea Actions workflows reside under `.gitea/workflows/`.
- `test.yml` executes the pytest suite using cached pip dependencies.
- `build-and-push.yml` logs into the container registry, rebuilds the Docker image using GitHub Actions cache-backed layers, and pushes `latest` (and additional tags as required).
- `deploy.yml` connects to the target host via SSH, pulls the pushed tag, stops any existing container, and launches the new version.
- Required secrets: `REGISTRY_URL`, `REGISTRY_USERNAME`, `REGISTRY_PASSWORD`, `SSH_HOST`, `SSH_USERNAME`, `SSH_PRIVATE_KEY`.
- Extend these workflows when introducing staging/blue-green deployments; keep cross-links with [14 — Testing & CI](14_testing_ci.md) up to date.
## Integrations and Future Work (deployment-related)
- **Persistence of results**: `/api/simulations/run` currently returns in-memory results; next iteration should persist to `simulation_result` and reference scenarios.
- **Deployment**: implement infrastructure-as-code (e.g., Terraform/Ansible) to provision the hosting environment and maintain parity across dev/stage/prod.

---
title: "08 — Concepts"
description: "Document key concepts, domain models, and terminology used throughout the architecture documentation."
status: draft
---
# 08 — Concepts
## Key Concepts
### Scenario
A `scenario` represents a distinct mining project configuration, encapsulating all relevant parameters, costs, consumption, production outputs, equipment, maintenance logs, and simulation results. Each scenario is independent, allowing users to model and analyze different mining strategies.
### Parameterization
Parameters are defined for each scenario to capture inputs such as resource consumption rates, production targets, cost factors, and equipment specifications. Parameters can have fixed values or be linked to probability distributions for stochastic simulations.
### Monte Carlo Simulation
The Monte Carlo simulation engine allows users to perform risk analysis by running multiple iterations of a scenario with varying input parameters based on defined probability distributions. This helps in understanding the range of possible outcomes and their associated probabilities.
## Domain Model
The domain model consists of the following key entities:
- `Scenario`: Represents a mining project configuration.
- `Parameter`: Input values for scenarios, which can be fixed or probabilistic.
- `Cost`: Tracks capital and operational expenditures.
- `Consumption`: Records resource usage.
- `ProductionOutput`: Captures production metrics.
- `Equipment`: Represents mining equipment associated with a scenario.
- `Maintenance`: Logs maintenance events for equipment.
- `SimulationResult`: Stores results from Monte Carlo simulations.
- `Distribution`: Defines probability distributions for stochastic parameters.
- `User`: Represents application users and their roles.
- `Report`: Generated reports summarizing scenario analyses.
- `Dashboard`: Visual representation of key performance indicators and metrics.
- `AuditLog`: Tracks changes and actions performed within the application.
- `Notification`: Alerts and messages related to scenario events and updates.
- `Tag`: Labels for categorizing scenarios and other entities.
- `Attachment`: Files associated with scenarios, such as documents or images.
- `Version`: Tracks different versions of scenarios and their configurations.
### Detailed Domain Models
See [Domain Models](08_concepts/08_01_domain_models.md) document for detailed class diagrams and entity relationships.
## Data Model Highlights
- `simulation_result`: staging table for future Monte Carlo outputs (not yet populated by `run_simulation`).
Foreign keys enforce referential integrity between domain tables and their scenarios, enabling per-scenario analytics.
### Detailed Data Models
See [Data Models](08_concepts/08_02_data_models.md) document for detailed ER diagrams and table descriptions.

# Data Models
## Data Model Highlights
- `scenario`: central entity describing a mining scenario; owns relationships to cost, consumption, production, equipment, and maintenance tables.
- `capex`, `opex`: monetary tracking linked to scenarios.
- `consumption`: resource usage entries parameterized by scenario and description.
- `parameter`: scenario inputs with base `value` and optional distribution linkage via `distribution_id`, `distribution_type`, and JSON `distribution_parameters` to support simulation sampling.
- `production_output`: production metrics per scenario.
- `equipment` and `maintenance`: equipment inventory and maintenance events with dates/costs.
- `simulation_result`: staging table for future Monte Carlo outputs (not yet populated by `run_simulation`).
Foreign keys enforce referential integrity between domain tables and their scenarios, enabling per-scenario analytics.
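For orientation, a SQLAlchemy mapping of the `parameter` table could follow the ER diagram below roughly as sketched here; column names mirror the diagram, but the actual model in the repository may differ.

```python
from sqlalchemy import JSON, Column, DateTime, Float, ForeignKey, Integer, String, func
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Parameter(Base):
    __tablename__ = "parameter"

    id = Column(Integer, primary_key=True)
    scenario_id = Column(Integer, ForeignKey("scenario.id"), nullable=False)
    name = Column(String(128), nullable=False)
    value = Column(Float)
    distribution_id = Column(Integer, nullable=True)  # FK to a distribution table per the diagram
    distribution_type = Column(String(32))
    distribution_parameters = Column(JSON)
    created_at = Column(DateTime(timezone=True), server_default=func.now())
    updated_at = Column(DateTime(timezone=True), server_default=func.now(), onupdate=func.now())
```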
## Schema Diagrams
```mermaid
erDiagram
SCENARIO ||--o{ CAPEX : has
SCENARIO ||--o{ OPEX : has
SCENARIO ||--o{ CONSUMPTION : has
SCENARIO ||--o{ PARAMETER : has
SCENARIO ||--o{ PRODUCTION_OUTPUT : has
SCENARIO ||--o{ EQUIPMENT : has
EQUIPMENT ||--o{ MAINTENANCE : has
SCENARIO ||--o{ SIMULATION_RESULT : has
SCENARIO {
int id PK
string name
string description
datetime created_at
datetime updated_at
}
CAPEX {
int id PK
int scenario_id FK
float amount
string description
datetime created_at
datetime updated_at
}
OPEX {
int id PK
int scenario_id FK
float amount
string description
datetime created_at
datetime updated_at
}
CONSUMPTION {
int id PK
int scenario_id FK
string resource_type
float quantity
string description
datetime created_at
datetime updated_at
}
PRODUCTION_OUTPUT {
int id PK
int scenario_id FK
float tonnage
float recovery_rate
float revenue
datetime created_at
datetime updated_at
}
EQUIPMENT {
int id PK
int scenario_id FK
string name
string type
datetime created_at
datetime updated_at
}
MAINTENANCE {
int id PK
int equipment_id FK
date maintenance_date
float cost
string description
datetime created_at
datetime updated_at
}
SIMULATION_RESULT {
int id PK
int scenario_id FK
json result_data
datetime created_at
datetime updated_at
}
PARAMETER {
int id PK
int scenario_id FK
string name
float value
int distribution_id FK
string distribution_type
json distribution_parameters
datetime created_at
datetime updated_at
}
```

### CI/CD
- Use Gitea Actions for CI/CD; workflows live under `.gitea/workflows/`.
- `test.yml` runs on every push, provisions a temporary Postgres 16 service, waits for readiness, executes the setup script in dry-run and live modes, installs Playwright browsers, and finally runs the full pytest suite.
- `build-and-push.yml` builds the Docker image with `docker/build-push-action@v2`, reusing GitHub Actions cache-backed layers, and pushes to the Gitea registry.
- `deploy.yml` connects to the target host (via `appleboy/ssh-action`) to pull the freshly pushed image and restart the container.
- Mandatory secrets: `REGISTRY_USERNAME`, `REGISTRY_PASSWORD`, `REGISTRY_URL`, `SSH_HOST`, `SSH_USERNAME`, `SSH_PRIVATE_KEY`.
- Run tests on pull requests to shared branches; enforce coverage target ≥80% (pytest-cov).
### Running Tests
Organize tests under the `tests/` directory mirroring the application structure:
```text
tests/
  unit/
    test_<module>.py
```
To run the Playwright tests:
```bash
pytest tests/e2e/
```
To run in headed mode: `pytest tests/e2e/ --headed`
### CI Integration
`test.yml` encapsulates the steps below:
- Check out the repository and set up Python 3.10.
- Configure the runner's apt proxy (if available), install project dependencies (requirements + test extras), and download Playwright browsers.
- Run `pytest` (extend with `--cov` flags when enforcing coverage).
- Upload coverage artifacts under `reports/coverage/`.
> The pip cache step is temporarily disabled in `test.yml` until the self-hosted cache service is exposed (see `docs/ci-cache-troubleshooting.md`).
`build-and-push.yml` adds:
- Registry login using repository secrets.
- Docker image build/push with GHA cache storage (`cache-from/cache-to` set to `type=gha`).
`deploy.yml` handles:
- SSH into the deployment host.
- Pull the tagged image from the registry.
- Stop, remove, and relaunch the `calminer` container exposing port 8000.
When adding new workflows, mirror this structure to ensure secrets, caching, and deployment steps remain aligned with the production environment.

## Clone and Project Setup
```powershell
# Clone the repository
git clone https://git.allucanget.biz/allucanget/calminer.git
cd calminer
pip install -r requirements.txt
```
```sql
CREATE USER calminer_user WITH PASSWORD 'your_password';
```
1. Create database:
```sql
CREATE DATABASE calminer;
```
## Environment Variables
1. Copy `.env.example` to `.env` at project root.
1. Edit `.env` to set database connection details:
```dotenv
DATABASE_DRIVER=postgresql
DATABASE_HOST=localhost
DATABASE_PORT=5432
DATABASE_USER=calminer_user
DATABASE_PASSWORD=your_password
DATABASE_NAME=calminer
DATABASE_SCHEMA=public
```
1. The application uses `python-dotenv` to load these variables. A legacy `DATABASE_URL` value is still accepted if the granular keys are omitted.
## Running the Application
```powershell
# Start the FastAPI server
uvicorn main:app --reload
```
```powershell
pytest
```
E2E tests use Playwright and a session-scoped `live_server` fixture that starts the app at `http://localhost:8001` for browser-driven tests.

This folder mirrors the arc42 chapter structure (adapted to Markdown).
## Files
- [01 Introduction and Goals](01_introduction_and_goals.md)
- [02 Architecture Constraints](02_architecture_constraints.md)
- [13 UI and Style](13_ui_and_style.md)
- [14 Testing & CI](14_testing_ci.md)
- [15 Development Setup](15_development_setup.md)

# CI Cache Troubleshooting
## Background
The test workflow (`.gitea/workflows/test.yml`) uses the `actions/cache` action to reuse the pip download cache located at `~/.cache/pip`. The cache key now hashes both `requirements.txt` and `requirements-test.txt` so the cache stays aligned with dependency changes.
## Current Observation
Recent CI runs report the following warning when the cache step executes:
```text
::warning::Failed to restore: getCacheEntry failed: connect ETIMEDOUT 172.17.0.5:40181
Cache not found for input keys: Linux-pip-<hash>, Linux-pip-
```
The timeout indicates the runner cannot reach the cache backend rather than a normal cache miss.
## Recommended Follow-Up
- Confirm that the Actions cache service is enabled for the CI environment (Gitea runners require the cache server URL to be provided via `ACTIONS_CACHE_URL` and `ACTIONS_RUNTIME_URL`).
- Verify network connectivity from the runner to the cache service endpoint and ensure required ports are open.
- After connectivity is restored, rerun the workflow to allow the cache to be populated and confirm subsequent runs restore the cache without warnings.
## Interim Guidance
- The workflow will proceed without cached dependencies, but package installs may take longer.
- Keep the cache step in place so it begins working automatically once the infrastructure is configured.

# Setup Script Idempotency Audit (2025-10-25)
This note captures the current evaluation of idempotent behaviour for `scripts/setup_database.py` and outlines follow-up actions.
## Admin Tasks
- **ensure_database**: guarded by `SELECT 1 FROM pg_database`; re-runs safely (see the sketch after this list). Failure mode: network issues or lack of privileges surface as psycopg2 errors without additional context.
- **ensure_role**: checks `pg_roles`, creates role if missing, reapplies grants each time. Subsequent runs execute grants again but PostgreSQL tolerates repeated grants.
- **ensure_schema**: uses `information_schema` guard and respects `--dry-run`; idempotent when schema is `public` or already present.
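The guard pattern behind `ensure_database` looks roughly like the following; this is a simplified sketch, and the actual script adds logging, rollback registration, and richer error handling.

```python
import psycopg2


def ensure_database(admin_dsn: str, db_name: str, dry_run: bool = False) -> None:
    conn = psycopg2.connect(admin_dsn)
    conn.autocommit = True  # CREATE DATABASE cannot run inside a transaction block
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT 1 FROM pg_database WHERE datname = %s", (db_name,))
            if cur.fetchone():
                print(f"Database {db_name} already present; nothing to do")
                return
            if dry_run:
                print(f"[dry-run] Would create database {db_name}")
                return
            cur.execute(f'CREATE DATABASE "{db_name}"')
            print(f"Created database {db_name}")
    finally:
        conn.close()
```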
## Application Tasks
- **initialize_schema**: relies on SQLAlchemy `create_all(checkfirst=True)`; repeatable. Dry-run output remains descriptive.
- **run_migrations**: new baseline workflow applies `000_base.sql` once and records legacy scripts as applied. Subsequent runs detect the baseline in `schema_migrations` and skip reapplication.
## Seeding
- `seed_baseline_data` seeds currencies and measurement units with upsert logic. Verification now raises on missing data, preventing silent failures.
- Running `--seed-data` repeatedly performs `ON CONFLICT` updates, making the operation safe.
## Outstanding Risks
1. Baseline migration relies on legacy files being present when first executed; if removed beforehand, old entries are never marked. (Low risk given repository state.)
2. `ensure_database` and `ensure_role` do not wrap SQL execution errors with additional context beyond psycopg2 messages.
3. Baseline verification assumes migrations and seeding run in the same process; manual runs of `scripts/seed_data.py` without the baseline could still fail.
## Recommended Actions
- Add regression tests ensuring repeated executions of key CLI paths (`--run-migrations`, `--seed-data`) result in no-op behaviour after the first run.
- Extend logging/error handling for admin operations to provide clearer messages on repeated failures.
- Consider a preflight check when migrations directory lacks legacy files but baseline is pending, warning about potential drift.

# Setup Script Logging Audit (2025-10-25)
The following observations capture current logging behaviour in `scripts/setup_database.py` and highlight areas requiring improved error handling and messaging.
## Connection Validation
- `validate_admin_connection` and `validate_application_connection` log entry/exit messages and raise `RuntimeError` with context if connection fails. This coverage is sufficient.
- `ensure_database` logs creation states but does not surface connection or SQL exceptions beyond the initial connection acquisition. When the inner `cursor.execute` calls fail, the exceptions bubble without contextual logging.
## Migration Runner
- Lists pending migrations and logs each application attempt.
- When the baseline is pending, the script logs whether it is a dry-run or live application and records legacy file marking. However, if `_apply_migration_file` raises an exception, the caller re-raises after logging the failure; there is no wrapping message guiding users toward manual cleanup.
- Legacy migration marking happens silently (just info logs). Failures during the insert into `schema_migrations` would currently propagate without added guidance.
## Seeding Workflow
- `seed_baseline_data` announces each seeding phase and skips verification in dry-run mode with a log breadcrumb.
- `_verify_seeded_data` warns about missing currencies/units and inactive defaults but does **not** raise errors, meaning CI can pass while the database is incomplete. There is no explicit log when verification succeeds.
- `_seed_units` logs when the `measurement_unit` table is missing, which is helpful, but the warning is the only feedback; no exception is raised.
## Suggested Enhancements
1. Wrap baseline application and legacy marking in `try/except` blocks that log actionable remediation steps before re-raising.
2. Promote seed verification failures (missing or inactive records) to exceptions so automated workflows fail fast; add success logs for clarity.
3. Add contextual logging around currency/measurement-unit insert failures, particularly around `execute_values` calls, to aid debugging malformed data.
4. Introduce structured logging (log codes or phases) for major steps (`CONNECT`, `MIGRATE`, `SEED`, `VERIFY`) to make scanning log files easier.
These findings inform the remaining TODO subtasks for enhanced error handling.

# Consolidated Migration Baseline Plan
This note outlines the content and structure of the planned baseline migration (`scripts/migrations/000_base.sql`). The objective is to capture the currently required schema changes in a single idempotent script so that fresh environments only need to apply one SQL file before proceeding with incremental migrations.
## Guiding Principles
1. **Idempotent DDL**: Every `CREATE` or `ALTER` statement must tolerate repeated execution. Use `IF NOT EXISTS` guards or existence checks (`information_schema`) where necessary.
2. **Order of Operations**: Create reference tables first, then update dependent tables, finally enforce foreign keys and constraints.
3. **Data Safety**: Default data seeded by migrations should be minimal and in ASCII-only form to avoid encoding issues in various shells and CI logs.
4. **Compatibility**: The baseline must reflect the schema shape expected by the current SQLAlchemy models, API routes, and seeding scripts.
## Schema Elements to Include
### 1. `currency` Table
- Columns: `id SERIAL PRIMARY KEY`, `code VARCHAR(3) UNIQUE NOT NULL`, `name VARCHAR(128) NOT NULL`, `symbol VARCHAR(8)`, `is_active BOOLEAN NOT NULL DEFAULT TRUE`.
- Index: implicit via unique constraint on `code`.
- Seed rows matching `scripts.seed_data.CURRENCY_SEEDS` (ASCII-only symbols such as `USD$`, `CAD$`).
- Upsert logic using `ON CONFLICT (code) DO UPDATE` to keep names/symbols in sync when rerun.
### 2. Currency Integration for CAPEX/OPEX
- Add `currency_id INTEGER` columns with `IF NOT EXISTS` guards.
- Populate `currency_id` from legacy `currency_code` if the column exists.
- Default null `currency_id` values to the USD row, then `ALTER` to `SET NOT NULL`.
- Create `fk_capex_currency` and `fk_opex_currency` constraints with `ON DELETE RESTRICT` semantics.
- Drop legacy `currency_code` column if it exists (safe because new column holds data).
### 3. Measurement Metadata on Consumption/Production
- Ensure `consumption` and `production_output` tables have `unit_name VARCHAR(64)` and `unit_symbol VARCHAR(16)` columns with `IF NOT EXISTS` guards.
### 4. `measurement_unit` Reference Table
- Columns: `id SERIAL PRIMARY KEY`, `code VARCHAR(64) UNIQUE NOT NULL`, `name VARCHAR(128) NOT NULL`, `symbol VARCHAR(16)`, `unit_type VARCHAR(32) NOT NULL`, `is_active BOOLEAN NOT NULL DEFAULT TRUE`, `created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()`, `updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()`.
- A trigger to maintain `updated_at` is deferred for now; the application layer will handle updates later, so the baseline omits the trigger.
- Seed rows matching `MEASUREMENT_UNIT_SEEDS` (ASCII names/symbols). Use `ON CONFLICT (code) DO UPDATE` to keep descriptive fields aligned.
### 5. Transaction Handling
- Wrap the main operations in a single `BEGIN; ... COMMIT;` block.
- Use anonymous `DO $$ ... $$;` blocks only where conditional logic is required (e.g., checking column existence before backfill).
## Migration Tracking Alignment
- Baseline file will be named `000_base.sql`. After execution, insert a row into `schema_migrations` with filename `000_base.sql` to keep the tracking table aligned.
- Existing migrations (`20251021_add_currency_and_unit_fields.sql`, `20251022_create_currency_table_and_fks.sql`) remain for historical reference but will no longer be applied to new environments once the baseline is present.
## Next Steps
1. Draft `000_base.sql` reflecting the steps above.
2. Update `run_migrations` to recognise the baseline file and mark older migrations as applied when the baseline exists.
3. Provide documentation in `docs/quickstart.md` explaining how to reset an environment using the baseline plus seeds.

uvicorn main:app --reload
```
## Docker-based setup
To build and run the application using Docker instead of a local Python environment:
```powershell
# Build the application image (multi-stage build keeps runtime small)
docker build -t calminer:latest .
# Start the container on port 8000
docker run --rm -p 8000:8000 calminer:latest
# Supply environment variables (e.g., Postgres connection)
docker run --rm -p 8000:8000 ^
-e DATABASE_DRIVER="postgresql" ^
-e DATABASE_HOST="db.host" ^
-e DATABASE_PORT="5432" ^
-e DATABASE_USER="calminer" ^
-e DATABASE_PASSWORD="s3cret" ^
-e DATABASE_NAME="calminer" ^
-e DATABASE_SCHEMA="public" ^
calminer:latest
```
If you maintain a Postgres or Redis dependency locally, consider authoring a `docker compose` stack that pairs them with the app container. The Docker image expects the database to be reachable and migrations executed before serving traffic.
## Usage Overview
- **API base URL**: `http://localhost:8000/api`
E2E tests use Playwright and a session-scoped `live_server` fixture that starts the app at `http://localhost:8001` for browser-driven tests.
## Migrations & Baseline
A consolidated baseline migration (`scripts/migrations/000_base.sql`) captures all schema changes required for a fresh installation. The script is idempotent: it creates the `currency` and `measurement_unit` reference tables, ensures consumption and production records expose unit metadata, and enforces the foreign keys used by CAPEX and OPEX.
Configure granular database settings in your PowerShell session before running migrations:
```powershell
$env:DATABASE_DRIVER = 'postgresql'
$env:DATABASE_HOST = 'localhost'
$env:DATABASE_PORT = '5432'
$env:DATABASE_USER = 'calminer'
$env:DATABASE_PASSWORD = 's3cret'
$env:DATABASE_NAME = 'calminer'
$env:DATABASE_SCHEMA = 'public'
python scripts/setup_database.py --run-migrations --seed-data --dry-run
python scripts/setup_database.py --run-migrations --seed-data
```
The dry-run invocation reports which steps would execute without making changes. The live run applies the baseline (if not already recorded in `schema_migrations`) and seeds the reference data relied upon by the UI and API.
> The application still accepts `DATABASE_URL` as a fallback if the granular variables are not set.
## Database bootstrap workflow
Provision or refresh a database instance with `scripts/setup_database.py`. Populate the required environment variables (an example lives at `config/setup_test.env.example`) and run:
```powershell
# Load test credentials (PowerShell)
Get-Content .\config\setup_test.env.example |
ForEach-Object {
if ($_ -and -not $_.StartsWith('#')) {
$name, $value = $_ -split '=', 2
Set-Item -Path Env:$name -Value $value
}
}
# Dry-run to inspect the planned actions
python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data --dry-run -v
# Execute the full workflow
python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data -v
```
Typical log output confirms:
- Admin and application connections succeed for the supplied credentials.
- Database and role creation are idempotent (`already present` when rerun).
- SQLAlchemy metadata either reports missing tables or `All tables already exist`.
- Migrations list pending files and finish with `Applied N migrations` (a new database reports `Applied 1 migrations` for `000_base.sql`).
After a successful run the target database contains all application tables plus `schema_migrations`, and that table records each applied migration file. New installations only record `000_base.sql`; upgraded environments retain historical entries alongside the baseline.
### Local Postgres via Docker Compose
For local validation without installing Postgres directly, use the provided compose file:
```powershell
docker compose -f docker-compose.postgres.yml up -d
```
#### Summary
1. Start the Postgres container with `docker compose -f docker-compose.postgres.yml up -d`.
2. Export the granular database environment variables (host `127.0.0.1`, port `5433`, database `calminer_local`, user/password `calminer`/`secret`).
3. Run the setup script twice: first with `--dry-run` to preview actions, then without it to apply changes.
4. When finished, stop and optionally remove the container/volume using `docker compose -f docker-compose.postgres.yml down`.
The service exposes Postgres 16 on `localhost:5433` with database `calminer_local` and role `calminer`/`secret`. When the container is running, set the granular environment variables before invoking the setup script:
```powershell
$env:DATABASE_DRIVER = 'postgresql'
$env:DATABASE_HOST = '127.0.0.1'
$env:DATABASE_PORT = '5433'
$env:DATABASE_USER = 'calminer'
$env:DATABASE_PASSWORD = 'secret'
$env:DATABASE_NAME = 'calminer_local'
$env:DATABASE_SCHEMA = 'public'
python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data --dry-run -v
python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data -v
```
When testing is complete, shut down the container (and optional persistent volume) with:
```powershell
docker compose -f docker-compose.postgres.yml down
docker volume rm calminer_postgres_local_postgres_data # optional cleanup
```
Document successful runs (or issues encountered) in `.github/instructions/DONE.TODO.md` for future reference.
### Seeding reference data
`scripts/seed_data.py` provides targeted control over the baseline datasets when the full setup script is not required:
```powershell
python scripts/seed_data.py --currencies --units --dry-run
python scripts/seed_data.py --currencies --units
```
The seeder upserts the canonical currency catalog (`USD`, `EUR`, `CLP`, `RMB`, `GBP`, `CAD`, `AUD`) using ASCII-safe symbols (`USD$`, `EUR`, etc.) and the measurement units referenced by the UI (`tonnes`, `kilograms`, `pounds`, `liters`, `cubic_meters`, `kilowatt_hours`). The setup script invokes the same seeder when `--seed-data` is provided and verifies the expected rows afterward, warning if any are missing or inactive.
### Rollback guidance
`scripts/setup_database.py` now tracks compensating actions when it creates the database or application role. If a later step fails, the script replays those rollback actions (dropping the newly created database or role and revoking grants) before exiting. Dry runs never register rollback steps and remain read-only.
If the script reports that some rollback steps could not complete—for example because a connection cannot be established—rerun the script with `--dry-run` to confirm the desired end state and then apply the outstanding cleanup manually:
```powershell
python scripts/setup_database.py --ensure-database --ensure-role --dry-run -v
# Manual cleanup examples when automation cannot connect
psql -d postgres -c "DROP DATABASE IF EXISTS calminer"
psql -d postgres -c "DROP ROLE IF EXISTS calminer"
```
After a failure and rollback, rerun the full setup once the environment issues are resolved.
### CI pipeline environment
The `.gitea/workflows/test.yml` job spins up a temporary PostgreSQL 16 container and runs the setup script twice: once with `--dry-run` to validate the plan and again without it to apply migrations and seeds. No external secrets are required; the workflow sets the following environment variables for both invocations and for pytest:
| Variable | Value | Purpose |
| --- | --- | --- |
| `DATABASE_DRIVER` | `postgresql` | Signals the driver to the setup script |
| `DATABASE_HOST` | `postgres` | Hostname of the Postgres job service container |
| `DATABASE_PORT` | `5432` | Default service port |
| `DATABASE_NAME` | `calminer_ci` | Target database created by the workflow |
| `DATABASE_USER` | `calminer` | Application role used during tests |
| `DATABASE_PASSWORD` | `secret` | Password for both admin and app role |
| `DATABASE_SCHEMA` | `public` | Default schema for the tests |
| `DATABASE_SUPERUSER` | `calminer` | Setup script uses the same role for admin actions |
| `DATABASE_SUPERUSER_PASSWORD` | `secret` | Matches the Postgres service password |
| `DATABASE_SUPERUSER_DB` | `calminer_ci` | Database to connect to for admin operations |
The workflow also updates `DATABASE_URL` for pytest to point at the CI Postgres instance. Existing tests continue to work unchanged, since SQLAlchemy reads the URL exactly as it does locally.
Because the workflow provisions everything inline, no repository or organization secrets need to be configured for basic CI runs. If you later move the setup step to staging or production pipelines, replace these inline values with secrets managed by the CI platform. When running on self-hosted runners behind an HTTP proxy or apt cache, ensure Playwright dependencies and OS packages inherit the same proxy settings that the workflow configures prior to installing browsers.
### Staging environment workflow
Use the staging checklist in `docs/staging_environment_setup.md` when running the setup script against the shared environment. A sample variable file (`config/setup_staging.env`) records the expected inputs (host, port, admin/application roles); copy it outside the repository or load the values securely via your shell before executing the workflow.
Recommended execution order:
1. Dry run with `--dry-run -v` to confirm connectivity and review planned operations. Capture the output to `reports/setup_staging_dry_run.log` (or similar) for auditing.
2. Execute the live run with the same flags minus `--dry-run` to provision the database, role grants, migrations, and seed data. Save the log as `reports/setup_staging_apply.log`.
3. Repeat the dry run to verify idempotency and record the result (for example `reports/setup_staging_post_apply.log`).
Record any issues in `.github/instructions/TODO.md` or `.github/instructions/DONE.TODO.md` as appropriate so the team can track follow-up actions.
## Database Objects
## Where to look next
- Architecture overview & chapters: [architecture](architecture/README.md) (per-chapter files under `docs/architecture/`)
- [Testing & CI](architecture/14_testing_ci.md)
- [Development setup](architecture/15_development_setup.md)
- Implementation plan & roadmap: [Solution strategy](architecture/04_solution_strategy.md)
- Routes: [routes](../routes/)
- Services: [services](../services/)
- Scripts: [scripts](../scripts/) (migrations and backfills)

# Baseline Seed Data Plan
This document captures the datasets that should be present in a fresh CalMiner installation and the structure required to manage them through `scripts/seed_data.py`.
## Currency Catalog
The `currency` table already exists and is seeded today via `scripts/seed_data.py`. The goal is to keep the canonical list in one place and ensure the default currency (USD) is always active.
| Code | Name | Symbol | Notes |
| ---- | ------------------- | ------ | ---------------------------------------- |
| USD | US Dollar | $ | Default currency (`DEFAULT_CURRENCY_CODE`) |
| EUR | Euro | EUR symbol | |
| CLP | Chilean Peso | $ | |
| RMB | Chinese Yuan | RMB symbol | |
| GBP | British Pound | GBP symbol | |
| CAD | Canadian Dollar | $ | |
| AUD | Australian Dollar | $ | |
Seeding behaviour:
- Upsert by ISO code; keep existing name/symbol when updated manually.
- Ensure `is_active` remains true for USD and defaults to true for new rows.
- Defer to runtime validation in `routes.currencies` for enforcing default behaviour.
## Measurement Units
UI routes (`routes/ui.py`) currently rely on the in-memory `MEASUREMENT_UNITS` list to populate dropdowns for consumption and production forms. To make this configurable and available to the API, introduce a dedicated `measurement_unit` table and seed it.
Proposed schema:
| Column | Type | Notes |
| ------------- | -------------- | ------------------------------------ |
| id | SERIAL / BIGINT | Primary key. |
| code | TEXT | Stable slug (e.g. `tonnes`). Unique. |
| name | TEXT | Display label. |
| symbol | TEXT | Short symbol (nullable). |
| unit_type | TEXT | Category (`mass`, `volume`, `energy`).|
| is_active | BOOLEAN | Default `true` for soft disabling. |
| created_at | TIMESTAMP | Optional `NOW()` default. |
| updated_at | TIMESTAMP | Optional `NOW()` trigger/default. |
Initial seed set (mirrors existing UI list plus type categorisation):
| Code | Name | Symbol | Unit Type |
| --------------- | ---------------- | ------ | --------- |
| tonnes | Tonnes | t | mass |
| kilograms | Kilograms | kg | mass |
| pounds | Pounds | lb | mass |
| liters | Liters | L | volume |
| cubic_meters | Cubic Meters | m3 | volume |
| kilowatt_hours | Kilowatt Hours | kWh | energy |
Seeding behaviour:
- Upsert rows by `code`.
- Preserve `unit_type` and `symbol` unless explicitly changed via administration tooling.
- Continue surfacing unit options to the UI by querying this table instead of the static constant.
## Default Settings
The application expects certain defaults to exist:
- **Default currency**: enforced by `routes.currencies._ensure_default_currency`; ensure seeds keep USD active.
- **Fallback measurement unit**: UI currently auto-selects the first option in the list. Once units move to the database, expose an application setting to choose a fallback (future work tracked under "Application Settings management").
## Seeding Structure Updates
To support the datasets above:
1. Extend `scripts/seed_data.py` with a `SeedDataset` registry so each dataset (currencies, units, future defaults) can declare its loader/upsert function and optional dependencies.
2. Add a `--dataset` CLI selector for targeted seeding while keeping `--all` as the default for `setup_database.py` integrations.
3. Update `scripts/setup_database.py` to:
- Run migration ensuring `measurement_unit` table exists.
- Execute the unit seeder after currencies when `--seed-data` is supplied.
- Verify post-seed counts, logging which dataset was inserted/updated.
4. Adjust UI routes to load measurement units from the database and remove the hard-coded list once the table is available.
This plan aligns with the TODO item for seeding initial data and lays the groundwork for consolidating migrations around a single baseline file that introduces both the schema and seed data in an idempotent manner.
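A possible shape for the `SeedDataset` registry proposed in step 1 above is sketched here; the names and signatures are illustrative and not the current `scripts/seed_data.py` API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Set

from sqlalchemy.orm import Session


@dataclass
class SeedDataset:
    name: str
    seeder: Callable[[Session, bool], None]  # (session, dry_run) -> None
    depends_on: List[str] = field(default_factory=list)


REGISTRY: Dict[str, SeedDataset] = {}


def register(dataset: SeedDataset) -> None:
    REGISTRY[dataset.name] = dataset


def run(session: Session, names: List[str], dry_run: bool = False) -> None:
    done: Set[str] = set()

    def _run(name: str) -> None:
        # Naive dependency-first execution; a real implementation should detect cycles.
        if name in done:
            return
        dataset = REGISTRY[name]
        for dep in dataset.depends_on:
            _run(dep)
        dataset.seeder(session, dry_run)
        done.add(name)

    for name in names:
        _run(name)
```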

# Staging Environment Setup
This guide outlines how to provision and validate the CalMiner staging database using `scripts/setup_database.py`. It complements the local and CI-focused instructions in `docs/quickstart.md`.
## Prerequisites
- Network access to the staging infrastructure (VPN or bastion, as required by ops).
- Provisioned PostgreSQL instance with superuser or delegated admin credentials for maintenance.
- Application credentials (role + password) dedicated to CalMiner staging.
- The application repository checked out with Python dependencies installed (`pip install -r requirements.txt`).
- Optional but recommended: a writable directory (for example `reports/`) to capture setup logs.
> Replace the placeholder values in the examples below with the actual host, port, and credential details supplied by ops.
## Environment Configuration
Populate the following environment variables before invoking the setup script. Store them in a secure location such as `config/setup_staging.env` (excluded from source control) and load them with `dotenv` or your shell profile.
| Variable | Description |
| --- | --- |
| `DATABASE_HOST` | Staging PostgreSQL hostname or IP (for example `staging-db.internal`). |
| `DATABASE_PORT` | Port exposed by the staging PostgreSQL service (default `5432`). |
| `DATABASE_NAME` | CalMiner staging database name (for example `calminer_staging`). |
| `DATABASE_USER` | Application role used by the FastAPI app (for example `calminer_app`). |
| `DATABASE_PASSWORD` | Password for the application role. |
| `DATABASE_SCHEMA` | Optional non-public schema; omit or set to `public` otherwise. |
| `DATABASE_SUPERUSER` | Administrative role with rights to create roles/databases (for example `calminer_admin`). |
| `DATABASE_SUPERUSER_PASSWORD` | Password for the administrative role. |
| `DATABASE_SUPERUSER_DB` | Database to connect to for admin tasks (default `postgres`). |
| `DATABASE_ADMIN_URL` | Optional DSN that overrides the granular admin settings above. |
You may also set `DATABASE_URL` for application runtime convenience, but the setup script only requires the values listed in the table.
### Loading Variables (PowerShell example)
```powershell
$env:DATABASE_HOST = "staging-db.internal"
$env:DATABASE_PORT = "5432"
$env:DATABASE_NAME = "calminer_staging"
$env:DATABASE_USER = "calminer_app"
$env:DATABASE_PASSWORD = "<app-password>"
$env:DATABASE_SUPERUSER = "calminer_admin"
$env:DATABASE_SUPERUSER_PASSWORD = "<admin-password>"
$env:DATABASE_SUPERUSER_DB = "postgres"
```
For bash shells, export the same variables using `export VARIABLE=value` or load them through `dotenv`.
## Setup Workflow
Run the setup script in three phases to validate idempotency and capture diagnostics:
1. **Dry run (diagnostic):**
```powershell
python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data --dry-run -v `
2>&1 | Tee-Object -FilePath reports/setup_staging_dry_run.log
```
Confirm that the script reports planned actions without failures. If the application role is missing, a dry run will log skip messages until a live run creates the role.
2. **Apply changes:**
```powershell
python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data -v `
2>&1 | Tee-Object -FilePath reports/setup_staging_apply.log
```
Verify the log for successful database creation, role grants, migration execution, and seed verification.
3. **Post-apply dry run:**
```powershell
python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data --dry-run -v `
2>&1 | Tee-Object -FilePath reports/setup_staging_post_apply.log
```
This run should confirm that all schema objects, migrations, and seed data are already in place.
## Validation Checklist
- [ ] Confirm the staging application can connect using the application DSN (for example, run `pytest tests/e2e/test_smoke.py` against staging or trigger a smoke test workflow).
- [ ] Inspect `schema_migrations` to ensure the baseline migration (`000_base.sql`) is recorded.
- [ ] Spot-check seeded reference data (`currency`, `measurement_unit`) for correctness.
- [ ] Capture and archive the three setup logs in a shared location for audit purposes.
## Troubleshooting
- If the dry run reports skipped actions because the application role does not exist, proceed with the live run; subsequent dry runs will validate as expected.
- Connection errors usually stem from network restrictions or incorrect credentials. Validate reachability with `psql` or `pg_isready` using the same host/port and credentials.
- For permission issues during migrations or seeding, confirm the admin role has rights on the target database and that the application role inherits the expected privileges.
## Rollback Guidance
- Database creation and role grants register rollback actions when not running in dry-run mode. If a later step fails, rerun the script without `--dry-run`; it will automatically revoke grants or drop newly created resources as part of the rollback routine.
- For staged environments where manual intervention is required, coordinate with ops before dropping databases or roles.
## Next Steps
- Keep this document updated as staging infrastructure evolves (for example, when migrating to managed services or rotating credentials).
- Once staging validation is complete, summarize the outcome in `.github/instructions/DONE.TODO.md` and cross-link the relevant log files.

from routes.consumption import router as consumption_router
from routes.production import router as production_router
from routes.equipment import router as equipment_router
from routes.reporting import router as reporting_router
from routes.currencies import router as currencies_router
from routes.simulations import router as simulations_router
from routes.maintenance import router as maintenance_router
app.include_router(production_router)
app.include_router(equipment_router)
app.include_router(maintenance_router)
app.include_router(reporting_router)
app.include_router(currencies_router)
app.include_router(ui_router)

requirements-test.txt (new file):
pytest
pytest-cov
pytest-httpx
playwright
pytest-playwright

requirements.txt (test dependencies moved to requirements-test.txt):
httpx
jinja2
pandas
numpy

from typing import Dict, List, Optional
from fastapi import APIRouter, Depends, HTTPException, Query, status
from pydantic import BaseModel, ConfigDict, Field, field_validator
from sqlalchemy.orm import Session
from sqlalchemy.exc import IntegrityError
from models.currency import Currency
from routes.dependencies import get_db
router = APIRouter(prefix="/api/currencies", tags=["Currencies"])
DEFAULT_CURRENCY_CODE = "USD"
DEFAULT_CURRENCY_NAME = "US Dollar"
DEFAULT_CURRENCY_SYMBOL = "$"
class CurrencyBase(BaseModel):
name: str = Field(..., min_length=1, max_length=128)
symbol: Optional[str] = Field(default=None, max_length=8)
@staticmethod
def _normalize_symbol(value: Optional[str]) -> Optional[str]:
if value is None:
return None
value = value.strip()
return value or None
@field_validator("name")
@classmethod
def _strip_name(cls, value: str) -> str:
return value.strip()
@field_validator("symbol")
@classmethod
def _strip_symbol(cls, value: Optional[str]) -> Optional[str]:
return cls._normalize_symbol(value)
class CurrencyCreate(CurrencyBase):
code: str = Field(..., min_length=3, max_length=3)
is_active: bool = True
@field_validator("code")
@classmethod
def _normalize_code(cls, value: str) -> str:
return value.strip().upper()
class CurrencyUpdate(CurrencyBase):
is_active: Optional[bool] = None
class CurrencyActivation(BaseModel):
is_active: bool
class CurrencyRead(CurrencyBase):
id: int
code: str
is_active: bool
model_config = ConfigDict(from_attributes=True)
def _ensure_default_currency(db: Session) -> Currency:
existing = (
db.query(Currency)
.filter(Currency.code == DEFAULT_CURRENCY_CODE)
.one_or_none()
)
if existing:
return existing
default_currency = Currency(
code=DEFAULT_CURRENCY_CODE,
name=DEFAULT_CURRENCY_NAME,
symbol=DEFAULT_CURRENCY_SYMBOL,
is_active=True,
)
db.add(default_currency)
try:
db.commit()
except IntegrityError:
db.rollback()
existing = (
db.query(Currency)
.filter(Currency.code == DEFAULT_CURRENCY_CODE)
.one()
)
return existing
db.refresh(default_currency)
return default_currency
def _get_currency_or_404(db: Session, code: str) -> Currency:
normalized = code.strip().upper()
currency = (
db.query(Currency)
.filter(Currency.code == normalized)
.one_or_none()
)
if currency is None:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND, detail="Currency not found")
return currency
@router.get("/", response_model=List[CurrencyRead])
def list_currencies(
include_inactive: bool = Query(
False, description="Include inactive currencies"),
db: Session = Depends(get_db),
):
_ensure_default_currency(db)
query = db.query(Currency)
if not include_inactive:
query = query.filter(Currency.is_active.is_(True))
currencies = query.order_by(Currency.code).all()
return currencies
@router.post("/", response_model=CurrencyRead, status_code=status.HTTP_201_CREATED)
def create_currency(payload: CurrencyCreate, db: Session = Depends(get_db)):
code = payload.code
existing = (
db.query(Currency)
.filter(Currency.code == code)
.one_or_none()
)
if existing is not None:
raise HTTPException(
status_code=status.HTTP_409_CONFLICT,
detail=f"Currency '{code}' already exists",
)
currency = Currency(
code=code,
name=payload.name,
symbol=CurrencyBase._normalize_symbol(payload.symbol),
is_active=payload.is_active,
)
db.add(currency)
db.commit()
db.refresh(currency)
return currency
@router.put("/{code}", response_model=CurrencyRead)
def update_currency(code: str, payload: CurrencyUpdate, db: Session = Depends(get_db)):
currency = _get_currency_or_404(db, code)
if payload.name is not None:
setattr(currency, "name", payload.name)
if payload.symbol is not None or payload.symbol == "":
setattr(
currency,
"symbol",
CurrencyBase._normalize_symbol(payload.symbol),
)
if payload.is_active is not None:
code_value = getattr(currency, "code")
if code_value == DEFAULT_CURRENCY_CODE and payload.is_active is False:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail="The default currency cannot be deactivated.",
)
setattr(currency, "is_active", payload.is_active)
db.add(currency)
db.commit()
db.refresh(currency)
return currency
@router.patch("/{code}/activation", response_model=CurrencyRead)
def toggle_currency_activation(code: str, body: CurrencyActivation, db: Session = Depends(get_db)):
currency = _get_currency_or_404(db, code)
code_value = getattr(currency, "code")
if code_value == DEFAULT_CURRENCY_CODE and body.is_active is False:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail="The default currency cannot be deactivated.",
)
setattr(currency, "is_active", body.is_active)
db.add(currency)
db.commit()
db.refresh(currency)
return currency
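
For a quick sanity check of the new endpoints, a minimal sketch using httpx, assuming the app is served locally on http://localhost:8000; the EUR payload is illustrative only:

```python
import httpx

BASE = "http://localhost:8000/api/currencies"

with httpx.Client(trust_env=False) as client:
    # Listing triggers _ensure_default_currency, so USD is always present.
    print(client.get(f"{BASE}/?include_inactive=true").json())

    # Create a currency; repeating the same code returns 409 Conflict.
    created = client.post(
        f"{BASE}/",
        json={"code": "EUR", "name": "Euro", "symbol": "€", "is_active": True},
    )
    print(created.status_code)  # 201 on first run

    # Rename and deactivate in one update (name is required by CurrencyUpdate).
    print(client.put(f"{BASE}/EUR", json={"name": "Euro", "is_active": False}).status_code)

    # Re-activate via the activation toggle.
    print(client.patch(f"{BASE}/EUR/activation", json={"is_active": True}).status_code)
```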


@@ -19,6 +19,7 @@ from models.simulation_result import SimulationResult
from routes.dependencies import get_db
from services.reporting import generate_report
from models.currency import Currency
from routes.currencies import DEFAULT_CURRENCY_CODE, _ensure_default_currency


CURRENCY_CHOICES: list[Dict[str, Any]] = [
@@ -148,9 +149,43 @@ def _load_currencies(db: Session) -> Dict[str, Any]:
    for c in db.query(Currency).filter_by(is_active=True).order_by(Currency.code).all():
        items.append(
            {"id": c.code, "name": f"{c.name} ({c.code})", "symbol": c.symbol})
if not items:
items.append({"id": "USD", "name": "US Dollar (USD)", "symbol": "$"})
    return {"currency_options": items}
def _load_currency_settings(db: Session) -> Dict[str, Any]:
_ensure_default_currency(db)
records = db.query(Currency).order_by(Currency.code).all()
currencies: list[Dict[str, Any]] = []
for record in records:
code_value = getattr(record, "code")
currencies.append(
{
"id": int(getattr(record, "id")),
"code": code_value,
"name": getattr(record, "name"),
"symbol": getattr(record, "symbol"),
"is_active": bool(getattr(record, "is_active", True)),
"is_default": code_value == DEFAULT_CURRENCY_CODE,
}
)
active_count = sum(1 for item in currencies if item["is_active"])
inactive_count = len(currencies) - active_count
return {
"currencies": currencies,
"currency_stats": {
"total": len(currencies),
"active": active_count,
"inactive": inactive_count,
},
"default_currency_code": DEFAULT_CURRENCY_CODE,
"currency_api_base": "/api/currencies",
}
def _load_consumption(db: Session) -> Dict[str, Any]:
    grouped: defaultdict[int, list[Dict[str, Any]]] = defaultdict(list)
    for record in (
@@ -635,3 +670,10 @@ async def simulations_view(request: Request, db: Session = Depends(get_db)):
async def reporting_view(request: Request, db: Session = Depends(get_db)):
    """Render the reporting view with scenario KPI summaries."""
    return _render(request, "reporting.html", _load_reporting(db))
@router.get("/ui/currencies", response_class=HTMLResponse)
async def currencies_view(request: Request, db: Session = Depends(get_db)):
"""Render the currency administration page with full currency context."""
context = _load_currency_settings(db)
return _render(request, "currencies.html", context)


@@ -6,21 +6,34 @@ Usage:
python scripts/backfill_currency.py --create-missing

This script is intentionally cautious: it defaults to dry-run mode and will refuse to run
if database connection settings are missing. It supports creating missing currency rows when `--create-missing`
is provided. Always run against a development/staging database first.
"""

from __future__ import annotations

import argparse
import importlib
import sys
from pathlib import Path

from sqlalchemy import text, create_engine

PROJECT_ROOT = Path(__file__).resolve().parent.parent
if str(PROJECT_ROOT) not in sys.path:
    sys.path.insert(0, str(PROJECT_ROOT))


def load_database_url() -> str:
    try:
        db_module = importlib.import_module("config.database")
    except RuntimeError as exc:
        raise RuntimeError(
            "Database configuration missing: set DATABASE_URL or provide granular "
            "variables (DATABASE_DRIVER, DATABASE_HOST, DATABASE_PORT, DATABASE_USER, "
            "DATABASE_PASSWORD, DATABASE_NAME, optional DATABASE_SCHEMA)."
        ) from exc
    return getattr(db_module, "DATABASE_URL")


def backfill(db_url: str, dry_run: bool = True, create_missing: bool = False) -> None:
@@ -41,12 +54,12 @@ def backfill(db_url: str, dry_run: bool = True, create_missing: bool = False) ->
            # insert and return id
            conn.execute(text("INSERT INTO currency (code, name, symbol, is_active) VALUES (:c, :n, NULL, TRUE)"), {
                "c": code, "n": code})
            r2 = conn.execute(text("SELECT id FROM currency WHERE code = :code"), {
                "code": code}).fetchone()
            if not r2:
                raise RuntimeError(
                    f"Unable to determine currency ID for '{code}' after insert"
                )
            return r2[0]
        return None
@@ -95,7 +108,7 @@ def main() -> None:
                        help="Create missing currency rows in the currency table")
    args = parser.parse_args()
    db = load_database_url()
    backfill(db, dry_run=args.dry_run, create_missing=args.create_missing)
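
The same entry points can also be driven from Python rather than the CLI; a small sketch, assuming the project root is on sys.path so `scripts` resolves as a package, and starting with a dry run:

```python
from scripts.backfill_currency import backfill, load_database_url

# Dry run first: reports what would change without writing anything.
backfill(load_database_url(), dry_run=True, create_missing=False)
```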


@@ -0,0 +1,142 @@
-- Baseline migration for CalMiner database schema
-- Date: 2025-10-25
-- Purpose: Consolidate foundational tables and reference data
BEGIN;
-- Currency reference table
CREATE TABLE IF NOT EXISTS currency (
id SERIAL PRIMARY KEY,
code VARCHAR(3) NOT NULL UNIQUE,
name VARCHAR(128) NOT NULL,
symbol VARCHAR(8),
is_active BOOLEAN NOT NULL DEFAULT TRUE
);
INSERT INTO currency (code, name, symbol, is_active)
VALUES
('USD', 'United States Dollar', 'USD$', TRUE),
('EUR', 'Euro', 'EUR', TRUE),
('CLP', 'Chilean Peso', 'CLP$', TRUE),
('RMB', 'Chinese Yuan', 'RMB', TRUE),
('GBP', 'British Pound', 'GBP', TRUE),
('CAD', 'Canadian Dollar', 'CAD$', TRUE),
('AUD', 'Australian Dollar', 'AUD$', TRUE)
ON CONFLICT (code) DO UPDATE
SET name = EXCLUDED.name,
symbol = EXCLUDED.symbol,
is_active = EXCLUDED.is_active;
-- Measurement unit reference table
CREATE TABLE IF NOT EXISTS measurement_unit (
id SERIAL PRIMARY KEY,
code VARCHAR(64) NOT NULL UNIQUE,
name VARCHAR(128) NOT NULL,
symbol VARCHAR(16),
unit_type VARCHAR(32) NOT NULL,
is_active BOOLEAN NOT NULL DEFAULT TRUE,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
INSERT INTO measurement_unit (code, name, symbol, unit_type, is_active)
VALUES
('tonnes', 'Tonnes', 't', 'mass', TRUE),
('kilograms', 'Kilograms', 'kg', 'mass', TRUE),
('pounds', 'Pounds', 'lb', 'mass', TRUE),
('liters', 'Liters', 'L', 'volume', TRUE),
('cubic_meters', 'Cubic Meters', 'm3', 'volume', TRUE),
('kilowatt_hours', 'Kilowatt Hours', 'kWh', 'energy', TRUE)
ON CONFLICT (code) DO UPDATE
SET name = EXCLUDED.name,
symbol = EXCLUDED.symbol,
unit_type = EXCLUDED.unit_type,
is_active = EXCLUDED.is_active;
-- Consumption and production measurement metadata
ALTER TABLE consumption
ADD COLUMN IF NOT EXISTS unit_name VARCHAR(64);
ALTER TABLE consumption
ADD COLUMN IF NOT EXISTS unit_symbol VARCHAR(16);
ALTER TABLE production_output
ADD COLUMN IF NOT EXISTS unit_name VARCHAR(64);
ALTER TABLE production_output
ADD COLUMN IF NOT EXISTS unit_symbol VARCHAR(16);
-- Currency integration for CAPEX and OPEX
ALTER TABLE capex
ADD COLUMN IF NOT EXISTS currency_id INTEGER;
ALTER TABLE opex
ADD COLUMN IF NOT EXISTS currency_id INTEGER;
DO $$
DECLARE
usd_id INTEGER;
BEGIN
-- Ensure currency_id columns align with legacy currency_code values when present
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'capex' AND column_name = 'currency_code'
) THEN
UPDATE capex AS c
SET currency_id = cur.id
FROM currency AS cur
WHERE c.currency_code = cur.code
AND (c.currency_id IS DISTINCT FROM cur.id);
END IF;
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'opex' AND column_name = 'currency_code'
) THEN
UPDATE opex AS o
SET currency_id = cur.id
FROM currency AS cur
WHERE o.currency_code = cur.code
AND (o.currency_id IS DISTINCT FROM cur.id);
END IF;
SELECT id INTO usd_id FROM currency WHERE code = 'USD';
IF usd_id IS NOT NULL THEN
UPDATE capex SET currency_id = usd_id WHERE currency_id IS NULL;
UPDATE opex SET currency_id = usd_id WHERE currency_id IS NULL;
END IF;
END $$;
ALTER TABLE capex
ALTER COLUMN currency_id SET NOT NULL;
ALTER TABLE opex
ALTER COLUMN currency_id SET NOT NULL;
DO $$
BEGIN
IF NOT EXISTS (
SELECT 1 FROM information_schema.table_constraints
WHERE table_schema = current_schema()
AND table_name = 'capex'
AND constraint_name = 'fk_capex_currency'
) THEN
ALTER TABLE capex
ADD CONSTRAINT fk_capex_currency FOREIGN KEY (currency_id)
REFERENCES currency (id) ON DELETE RESTRICT;
END IF;
IF NOT EXISTS (
SELECT 1 FROM information_schema.table_constraints
WHERE table_schema = current_schema()
AND table_name = 'opex'
AND constraint_name = 'fk_opex_currency'
) THEN
ALTER TABLE opex
ADD CONSTRAINT fk_opex_currency FOREIGN KEY (currency_id)
REFERENCES currency (id) ON DELETE RESTRICT;
END IF;
END $$;
ALTER TABLE capex
DROP COLUMN IF EXISTS currency_code;
ALTER TABLE opex
DROP COLUMN IF EXISTS currency_code;
COMMIT;
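
Applying the baseline migration is outside this diff, but a sketch with psycopg2, reusing DatabaseConfig from scripts/setup_database.py, could look like the following; the SQL file path is a placeholder for wherever the script above is stored in your checkout:

```python
from pathlib import Path

import psycopg2

from scripts.setup_database import DatabaseConfig

MIGRATION_SQL = Path("migrations/0001_baseline.sql")  # placeholder path


def apply_baseline() -> None:
    config = DatabaseConfig.from_env()
    with psycopg2.connect(config.application_dsn()) as conn:
        conn.autocommit = True  # the file drives its own BEGIN/COMMIT
        with conn.cursor() as cursor:
            cursor.execute(MIGRATION_SQL.read_text(encoding="utf-8"))


if __name__ == "__main__":
    apply_baseline()
```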


@@ -1,29 +0,0 @@
-- CalMiner Migration: add currency and unit metadata columns
-- Date: 2025-10-21
-- Purpose: align persisted schema with API changes introducing currency selection for
-- CAPEX/OPEX costs and unit selection for consumption/production records.
BEGIN;
-- CAPEX / OPEX
ALTER TABLE capex
ADD COLUMN currency_code VARCHAR(3) NOT NULL DEFAULT 'USD';
ALTER TABLE opex
ADD COLUMN currency_code VARCHAR(3) NOT NULL DEFAULT 'USD';
-- Consumption tracking
ALTER TABLE consumption
ADD COLUMN unit_name VARCHAR(64);
ALTER TABLE consumption
ADD COLUMN unit_symbol VARCHAR(16);
-- Production output
ALTER TABLE production_output
ADD COLUMN unit_name VARCHAR(64);
ALTER TABLE production_output
ADD COLUMN unit_symbol VARCHAR(16);
COMMIT;


@@ -1,66 +0,0 @@
-- Migration: create currency referential table and convert capex/opex to FK
-- Date: 2025-10-22
BEGIN;
-- 1) Create currency table
CREATE TABLE IF NOT EXISTS currency (
id SERIAL PRIMARY KEY,
code VARCHAR(3) NOT NULL UNIQUE,
name VARCHAR(128) NOT NULL,
symbol VARCHAR(8),
is_active BOOLEAN NOT NULL DEFAULT TRUE
);
-- 2) Seed some common currencies (idempotent)
INSERT INTO currency (code, name, symbol, is_active)
SELECT * FROM (VALUES
('USD','United States Dollar','$',TRUE),
('EUR','Euro','',TRUE),
('CLP','Chilean Peso','CLP$',TRUE),
('RMB','Chinese Yuan','¥',TRUE),
('GBP','British Pound','£',TRUE),
('CAD','Canadian Dollar','C$',TRUE),
('AUD','Australian Dollar','A$',TRUE)
) AS v(code,name,symbol,is_active)
ON CONFLICT (code) DO NOTHING;
-- 3) Add currency_id columns to capex and opex with nullable true to allow backfill
ALTER TABLE capex ADD COLUMN IF NOT EXISTS currency_id INTEGER;
ALTER TABLE opex ADD COLUMN IF NOT EXISTS currency_id INTEGER;
-- 4) Backfill currency_id using existing currency_code column where present
-- Only do this if the currency_code column exists
DO $$
BEGIN
IF EXISTS (SELECT 1 FROM information_schema.columns WHERE table_name='capex' AND column_name='currency_code') THEN
UPDATE capex SET currency_id = (
SELECT id FROM currency WHERE code = capex.currency_code LIMIT 1
);
END IF;
IF EXISTS (SELECT 1 FROM information_schema.columns WHERE table_name='opex' AND column_name='currency_code') THEN
UPDATE opex SET currency_id = (
SELECT id FROM currency WHERE code = opex.currency_code LIMIT 1
);
END IF;
END$$;
-- 5) Make currency_id non-nullable and add FK constraint, default to USD where missing
UPDATE currency SET is_active = TRUE WHERE code = 'USD';
-- Ensure any NULL currency_id uses USD
UPDATE capex SET currency_id = (SELECT id FROM currency WHERE code='USD') WHERE currency_id IS NULL;
UPDATE opex SET currency_id = (SELECT id FROM currency WHERE code='USD') WHERE currency_id IS NULL;
ALTER TABLE capex ALTER COLUMN currency_id SET NOT NULL;
ALTER TABLE opex ALTER COLUMN currency_id SET NOT NULL;
ALTER TABLE capex ADD CONSTRAINT fk_capex_currency FOREIGN KEY (currency_id) REFERENCES currency(id);
ALTER TABLE opex ADD CONSTRAINT fk_opex_currency FOREIGN KEY (currency_id) REFERENCES currency(id);
-- 6) Optionally drop old currency_code columns if they exist
ALTER TABLE capex DROP COLUMN IF EXISTS currency_code;
ALTER TABLE opex DROP COLUMN IF EXISTS currency_code;
COMMIT;

scripts/seed_data.py (new file)

@@ -0,0 +1,162 @@
"""Seed baseline data for CalMiner in an idempotent manner.
Usage examples
--------------
```powershell
# Use existing environment variables (or load from setup_test.env.example)
python scripts/seed_data.py --currencies --units --defaults
# Dry-run to preview actions
python scripts/seed_data.py --currencies --dry-run
```
"""
from __future__ import annotations
import argparse
import logging
import os
from typing import Iterable, Optional
import psycopg2
from psycopg2 import errors
from psycopg2.extras import execute_values
from scripts.setup_database import DatabaseConfig
logger = logging.getLogger(__name__)
CURRENCY_SEEDS = (
("USD", "United States Dollar", "USD$", True),
("EUR", "Euro", "EUR", True),
("CLP", "Chilean Peso", "CLP$", True),
("RMB", "Chinese Yuan", "RMB", True),
("GBP", "British Pound", "GBP", True),
("CAD", "Canadian Dollar", "CAD$", True),
("AUD", "Australian Dollar", "AUD$", True),
)
MEASUREMENT_UNIT_SEEDS = (
("tonnes", "Tonnes", "t", "mass", True),
("kilograms", "Kilograms", "kg", "mass", True),
("pounds", "Pounds", "lb", "mass", True),
("liters", "Liters", "L", "volume", True),
("cubic_meters", "Cubic Meters", "m3", "volume", True),
("kilowatt_hours", "Kilowatt Hours", "kWh", "energy", True),
)
def parse_args() -> argparse.Namespace:
parser = argparse.ArgumentParser(description="Seed baseline CalMiner data")
parser.add_argument("--currencies", action="store_true", help="Seed currency table")
parser.add_argument("--units", action="store_true", help="Seed unit table")
parser.add_argument("--defaults", action="store_true", help="Seed default records")
parser.add_argument("--dry-run", action="store_true", help="Print actions without executing")
parser.add_argument(
"--verbose", "-v", action="count", default=0, help="Increase logging verbosity"
)
return parser.parse_args()
def _configure_logging(args: argparse.Namespace) -> None:
level = logging.WARNING - (10 * min(args.verbose, 2))
logging.basicConfig(level=max(level, logging.INFO), format="%(levelname)s %(message)s")
def main() -> None:
args = parse_args()
run_with_namespace(args)
def run_with_namespace(
args: argparse.Namespace,
*,
config: Optional[DatabaseConfig] = None,
) -> None:
_configure_logging(args)
if not any((args.currencies, args.units, args.defaults)):
logger.info("No seeding options provided; exiting")
return
config = config or DatabaseConfig.from_env()
with psycopg2.connect(config.application_dsn()) as conn:
conn.autocommit = True
with conn.cursor() as cursor:
if args.currencies:
_seed_currencies(cursor, dry_run=args.dry_run)
if args.units:
_seed_units(cursor, dry_run=args.dry_run)
if args.defaults:
_seed_defaults(cursor, dry_run=args.dry_run)
def _seed_currencies(cursor, *, dry_run: bool) -> None:
logger.info("Seeding currency table (%d rows)", len(CURRENCY_SEEDS))
if dry_run:
for code, name, symbol, active in CURRENCY_SEEDS:
logger.info("Dry run: would upsert currency %s (%s)", code, name)
return
execute_values(
cursor,
"""
INSERT INTO currency (code, name, symbol, is_active)
VALUES %s
ON CONFLICT (code) DO UPDATE
SET name = EXCLUDED.name,
symbol = EXCLUDED.symbol,
is_active = EXCLUDED.is_active
""",
CURRENCY_SEEDS,
)
logger.info("Currency seed complete")
def _seed_units(cursor, *, dry_run: bool) -> None:
total = len(MEASUREMENT_UNIT_SEEDS)
logger.info("Seeding measurement_unit table (%d rows)", total)
if dry_run:
for code, name, symbol, unit_type, _ in MEASUREMENT_UNIT_SEEDS:
logger.info(
"Dry run: would upsert measurement unit %s (%s - %s)",
code,
name,
unit_type,
)
return
try:
execute_values(
cursor,
"""
INSERT INTO measurement_unit (code, name, symbol, unit_type, is_active)
VALUES %s
ON CONFLICT (code) DO UPDATE
SET name = EXCLUDED.name,
symbol = EXCLUDED.symbol,
unit_type = EXCLUDED.unit_type,
is_active = EXCLUDED.is_active
""",
MEASUREMENT_UNIT_SEEDS,
)
except errors.UndefinedTable:
logger.warning(
"measurement_unit table does not exist; skipping unit seeding."
)
cursor.connection.rollback()
return
logger.info("Measurement unit seed complete")
def _seed_defaults(cursor, *, dry_run: bool) -> None:
logger.info("Seeding default records - not yet implemented")
if dry_run:
return
if __name__ == "__main__":
main()
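
run_with_namespace also allows seeding from other scripts or tests without going through the CLI; a sketch, assuming DatabaseConfig.from_env() can build a DSN from the current environment:

```python
import argparse

from scripts.seed_data import run_with_namespace
from scripts.setup_database import DatabaseConfig

# Preview currency and unit seeding without touching the database.
args = argparse.Namespace(
    currencies=True, units=True, defaults=False, dry_run=True, verbose=1
)
run_with_namespace(args, config=DatabaseConfig.from_env())
```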

scripts/setup_database.py (new file, 1178 lines; diff suppressed because it is too large)


@@ -1,17 +1,17 @@
:root {
  --color-background: #f4f5f7;
  --color-surface: #ffffff;
  --color-text-primary: #2a1f33;
  --color-text-secondary: #624769;
  --color-text-muted: #64748b;
  --color-text-subtle: #94a3b8;
  --color-text-invert: #ffffff;
  --color-text-dark: #0f172a;
  --color-text-strong: #111827;
  --color-primary: #5f320d;
  --color-primary-strong: #7e4c13;
  --color-primary-stronger: #837c15;
  --color-accent: #bff838;
  --color-border: #e2e8f0;
  --color-border-strong: #cbd5e1;
  --color-highlight: #eef2ff;
@@ -83,7 +83,7 @@ body {
  height: 44px;
  border-radius: 12px;
  background: linear-gradient(
    0deg,
    var(--color-primary-stronger),
    var(--color-accent)
  );
@@ -143,6 +143,7 @@ body {
}

.app-main {
  background-color: var(--color-background);
  display: flex;
  flex-direction: column;
  flex: 1;

static/js/currencies.js (new file)

@@ -0,0 +1,537 @@
document.addEventListener("DOMContentLoaded", () => {
const dataElement = document.getElementById("currencies-data");
const editorSection = document.getElementById("currencies-editor");
const tableBody = document.getElementById("currencies-table-body");
const tableEmptyState = document.getElementById("currencies-table-empty");
const metrics = {
total: document.getElementById("currency-metric-total"),
active: document.getElementById("currency-metric-active"),
inactive: document.getElementById("currency-metric-inactive"),
};
const form = document.getElementById("currency-form");
const existingSelect = document.getElementById("currency-form-existing");
const codeInput = document.getElementById("currency-form-code");
const nameInput = document.getElementById("currency-form-name");
const symbolInput = document.getElementById("currency-form-symbol");
const statusSelect = document.getElementById("currency-form-status");
const resetButton = document.getElementById("currency-form-reset");
const feedbackElement = document.getElementById("currency-form-feedback");
const saveButton = form ? form.querySelector("button[type='submit']") : null;
const uppercaseCode = (value) =>
(value || "").toString().trim().toUpperCase();
const normalizeSymbol = (value) => {
if (value === undefined || value === null) {
return null;
}
const trimmed = String(value).trim();
return trimmed ? trimmed : null;
};
const normalizeApiBase = (value) => {
if (!value || typeof value !== "string") {
return "/api/currencies";
}
return value.endsWith("/") ? value.slice(0, -1) : value;
};
let currencies = [];
let apiBase = "/api/currencies";
let defaultCurrencyCode = "USD";
const buildCurrencyRecord = (record) => {
if (!record || typeof record !== "object") {
return null;
}
const code = uppercaseCode(record.code);
return {
id: record.id ?? null,
code,
name: record.name || "",
symbol: record.symbol || "",
is_active: Boolean(record.is_active),
is_default: code === defaultCurrencyCode,
};
};
const findCurrencyIndex = (code) => {
return currencies.findIndex((item) => item.code === code);
};
const upsertCurrency = (record) => {
const normalized = buildCurrencyRecord(record);
if (!normalized) {
return null;
}
const existingIndex = findCurrencyIndex(normalized.code);
if (existingIndex >= 0) {
currencies[existingIndex] = normalized;
} else {
currencies.push(normalized);
}
currencies.sort((a, b) => a.code.localeCompare(b.code));
return normalized;
};
const replaceCurrencyList = (records) => {
if (!Array.isArray(records)) {
return;
}
currencies = records
.map((record) => buildCurrencyRecord(record))
.filter((record) => record !== null)
.sort((a, b) => a.code.localeCompare(b.code));
};
const applyPayload = () => {
if (!dataElement) {
return;
}
try {
const parsed = JSON.parse(dataElement.textContent || "{}");
if (parsed && typeof parsed === "object") {
if (parsed.default_currency_code) {
defaultCurrencyCode = uppercaseCode(parsed.default_currency_code);
}
if (parsed.currency_api_base) {
apiBase = normalizeApiBase(parsed.currency_api_base);
}
if (Array.isArray(parsed.currencies)) {
replaceCurrencyList(parsed.currencies);
}
}
} catch (error) {
console.error("Unable to parse currencies payload", error);
}
};
const showFeedback = (message, type = "success") => {
if (!feedbackElement) {
return;
}
feedbackElement.textContent = message;
feedbackElement.classList.remove("hidden", "success", "error");
feedbackElement.classList.add(type);
};
const hideFeedback = () => {
if (!feedbackElement) {
return;
}
feedbackElement.classList.add("hidden");
feedbackElement.classList.remove("success", "error");
feedbackElement.textContent = "";
};
const setButtonLoading = (button, isLoading) => {
if (!button) {
return;
}
button.disabled = isLoading;
button.classList.toggle("is-loading", isLoading);
};
const updateMetrics = () => {
const total = currencies.length;
const active = currencies.filter((item) => item.is_active).length;
const inactive = total - active;
if (metrics.total) {
metrics.total.textContent = String(total);
}
if (metrics.active) {
metrics.active.textContent = String(active);
}
if (metrics.inactive) {
metrics.inactive.textContent = String(inactive);
}
};
const renderExistingOptions = (
selectedCode = existingSelect ? existingSelect.value : ""
) => {
if (!existingSelect) {
return;
}
const placeholder = existingSelect.querySelector("option[value='']");
const placeholderClone = placeholder ? placeholder.cloneNode(true) : null;
existingSelect.innerHTML = "";
if (placeholderClone) {
existingSelect.appendChild(placeholderClone);
}
const fragment = document.createDocumentFragment();
currencies.forEach((currency) => {
const option = document.createElement("option");
option.value = currency.code;
option.textContent = currency.name
? `${currency.name} (${currency.code})`
: currency.code;
if (selectedCode === currency.code) {
option.selected = true;
}
fragment.appendChild(option);
});
existingSelect.appendChild(fragment);
if (
selectedCode &&
!currencies.some((item) => item.code === selectedCode)
) {
existingSelect.value = "";
}
};
const renderTable = () => {
if (!tableBody) {
return;
}
tableBody.innerHTML = "";
if (!currencies.length) {
if (tableEmptyState) {
tableEmptyState.classList.remove("hidden");
}
return;
}
if (tableEmptyState) {
tableEmptyState.classList.add("hidden");
}
const fragment = document.createDocumentFragment();
currencies.forEach((currency) => {
const row = document.createElement("tr");
const codeCell = document.createElement("td");
codeCell.textContent = currency.code;
row.appendChild(codeCell);
const nameCell = document.createElement("td");
nameCell.textContent = currency.name || "—";
row.appendChild(nameCell);
const symbolCell = document.createElement("td");
symbolCell.textContent = currency.symbol || "—";
row.appendChild(symbolCell);
const statusCell = document.createElement("td");
statusCell.textContent = currency.is_active ? "Active" : "Inactive";
if (currency.is_default) {
statusCell.textContent += " (Default)";
}
row.appendChild(statusCell);
const actionsCell = document.createElement("td");
const editButton = document.createElement("button");
editButton.type = "button";
editButton.className = "btn";
editButton.dataset.action = "edit";
editButton.dataset.code = currency.code;
editButton.textContent = "Edit";
editButton.style.marginRight = "0.5rem";
const toggleButton = document.createElement("button");
toggleButton.type = "button";
toggleButton.className = "btn";
toggleButton.dataset.action = "toggle";
toggleButton.dataset.code = currency.code;
toggleButton.textContent = currency.is_active ? "Deactivate" : "Activate";
if (currency.is_default && currency.is_active) {
toggleButton.disabled = true;
toggleButton.title = "The default currency must remain active.";
}
actionsCell.appendChild(editButton);
actionsCell.appendChild(toggleButton);
row.appendChild(actionsCell);
fragment.appendChild(row);
});
tableBody.appendChild(fragment);
};
const refreshUI = (selectedCode) => {
currencies.sort((a, b) => a.code.localeCompare(b.code));
renderTable();
renderExistingOptions(selectedCode);
updateMetrics();
};
const findCurrency = (code) =>
currencies.find((item) => item.code === code) || null;
const setFormForCurrency = (currency) => {
if (!form || !codeInput || !nameInput || !symbolInput || !statusSelect) {
return;
}
if (!currency) {
form.reset();
if (existingSelect) {
existingSelect.value = "";
}
codeInput.readOnly = false;
codeInput.value = "";
nameInput.value = "";
symbolInput.value = "";
statusSelect.disabled = false;
statusSelect.value = "true";
statusSelect.title = "";
return;
}
if (existingSelect) {
existingSelect.value = currency.code;
}
codeInput.readOnly = true;
codeInput.value = currency.code;
nameInput.value = currency.name || "";
symbolInput.value = currency.symbol || "";
statusSelect.value = currency.is_active ? "true" : "false";
if (currency.is_default) {
statusSelect.disabled = true;
statusSelect.value = "true";
statusSelect.title = "The default currency must remain active.";
} else {
statusSelect.disabled = false;
statusSelect.title = "";
}
};
const resetFormState = () => {
setFormForCurrency(null);
};
const parseError = async (response, fallbackMessage) => {
try {
const detail = await response.json();
if (detail && typeof detail === "object" && detail.detail) {
return detail.detail;
}
} catch (error) {
// ignore JSON parse errors
}
return fallbackMessage;
};
const fetchCurrenciesFromApi = async () => {
const url = `${apiBase}/?include_inactive=true`;
try {
const response = await fetch(url);
if (!response.ok) {
return;
}
const list = await response.json();
if (Array.isArray(list)) {
replaceCurrencyList(list);
refreshUI(existingSelect ? existingSelect.value : undefined);
}
} catch (error) {
console.warn("Unable to refresh currency list", error);
}
};
const handleSubmit = async (event) => {
event.preventDefault();
hideFeedback();
if (!form || !codeInput || !nameInput || !statusSelect) {
return;
}
const editingCode = existingSelect
? uppercaseCode(existingSelect.value)
: "";
const codeValue = uppercaseCode(codeInput.value);
const nameValue = (nameInput.value || "").trim();
const symbolValue = normalizeSymbol(symbolInput ? symbolInput.value : "");
const isActive = statusSelect.value !== "false";
if (!nameValue) {
showFeedback("Provide a currency name.", "error");
return;
}
if (!editingCode) {
if (!codeValue || codeValue.length !== 3) {
showFeedback("Provide a three-letter currency code.", "error");
return;
}
}
const payload = editingCode
? {
name: nameValue,
symbol: symbolValue,
is_active: isActive,
}
: {
code: codeValue,
name: nameValue,
symbol: symbolValue,
is_active: isActive,
};
const targetCode = editingCode || codeValue;
const url = editingCode
? `${apiBase}/${encodeURIComponent(editingCode)}`
: `${apiBase}/`;
setButtonLoading(saveButton, true);
try {
const response = await fetch(url, {
method: editingCode ? "PUT" : "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify(payload),
});
if (!response.ok) {
const message = await parseError(
response,
editingCode
? "Unable to update the currency."
: "Unable to create the currency."
);
throw new Error(message);
}
const result = await response.json();
const updated = upsertCurrency(result);
defaultCurrencyCode = uppercaseCode(defaultCurrencyCode);
refreshUI(updated ? updated.code : targetCode);
if (editingCode) {
showFeedback("Currency updated successfully.");
if (updated) {
setFormForCurrency(updated);
}
} else {
showFeedback("Currency created successfully.");
resetFormState();
}
} catch (error) {
showFeedback(error.message || "An unexpected error occurred.", "error");
} finally {
setButtonLoading(saveButton, false);
}
};
const handleToggle = async (code, button) => {
const record = findCurrency(code);
if (!record) {
return;
}
hideFeedback();
const nextState = !record.is_active;
const url = `${apiBase}/${encodeURIComponent(code)}/activation`;
setButtonLoading(button, true);
try {
const response = await fetch(url, {
method: "PATCH",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ is_active: nextState }),
});
if (!response.ok) {
const message = await parseError(
response,
nextState
? "Unable to activate the currency."
: "Unable to deactivate the currency."
);
throw new Error(message);
}
const result = await response.json();
const updated = upsertCurrency(result);
refreshUI(updated ? updated.code : code);
if (existingSelect && existingSelect.value === code && updated) {
setFormForCurrency(updated);
}
const actionMessage = nextState
? `Currency ${code} activated.`
: `Currency ${code} deactivated.`;
showFeedback(actionMessage);
} catch (error) {
showFeedback(error.message || "An unexpected error occurred.", "error");
} finally {
setButtonLoading(button, false);
}
};
const handleTableClick = (event) => {
const button = event.target.closest("button[data-action]");
if (!button) {
return;
}
const code = uppercaseCode(button.dataset.code);
const action = button.dataset.action;
if (!code || !action) {
return;
}
if (action === "edit") {
const currency = findCurrency(code);
if (currency) {
setFormForCurrency(currency);
hideFeedback();
if (nameInput) {
nameInput.focus();
}
}
} else if (action === "toggle") {
handleToggle(code, button);
}
};
applyPayload();
if (editorSection && editorSection.dataset.defaultCode) {
defaultCurrencyCode = uppercaseCode(editorSection.dataset.defaultCode);
currencies = currencies.map((record) => {
return record
? {
...record,
is_default: record.code === defaultCurrencyCode,
}
: record;
});
}
apiBase = normalizeApiBase(apiBase);
refreshUI();
if (form) {
form.addEventListener("submit", handleSubmit);
}
if (existingSelect) {
existingSelect.addEventListener("change", (event) => {
const selectedCode = uppercaseCode(event.target.value);
if (!selectedCode) {
hideFeedback();
resetFormState();
return;
}
const currency = findCurrency(selectedCode);
if (currency) {
setFormForCurrency(currency);
hideFeedback();
}
});
}
if (resetButton) {
resetButton.addEventListener("click", (event) => {
event.preventDefault();
hideFeedback();
resetFormState();
});
}
if (codeInput) {
codeInput.addEventListener("input", () => {
const value = uppercaseCode(codeInput.value).slice(0, 3);
codeInput.value = value;
});
}
if (tableBody) {
tableBody.addEventListener("click", handleTableClick);
}
fetchCurrenciesFromApi();
});


@@ -38,7 +38,8 @@ endblock %} {% block content %}
  </div>
  {% else %}
  <p class="empty-state">
    No scenarios available. Create a <a href="scenarios">scenario</a> before
    adding parameters.
  </p>
  {% endif %}
</section>


@@ -30,11 +30,7 @@ title %}Consumption · CalMiner{% endblock %} {% block content %}
  scenario", placeholder_disabled=True ) }}
  <label for="consumption-form-unit">
    Unit
    <select id="consumption-form-unit" name="unit_name" required>
      <option value="" disabled selected>Select unit</option>
      {% for unit in unit_options %}
      <option value="{{ unit.name }}" data-symbol="{{ unit.symbol }}">
@@ -43,11 +39,7 @@ title %}Consumption · CalMiner{% endblock %} {% block content %}
      {% endfor %}
    </select>
  </label>
  <input id="consumption-form-unit-symbol" type="hidden" name="unit_symbol" />
  <label for="consumption-form-amount">
    Amount
    <input
@@ -71,7 +63,7 @@ title %}Consumption · CalMiner{% endblock %} {% block content %}
  </form>
  {{ feedback("consumption-feedback") }} {% else %}
  <p class="empty-state">
    Create a <a href="scenarios">scenario</a> before adding consumption records.
  </p>
  {% endif %}
</section>


@@ -56,18 +56,10 @@ title %}Costs · CalMiner{% endblock %} {% block content %}
  <form id="capex-form" class="form-grid">
    {{ select_field( "Scenario", "capex-form-scenario", name="scenario_id",
    options=scenarios, required=True, placeholder="Select a scenario",
    placeholder_disabled=True ) }} {{ select_field( "Currency",
    "capex-form-currency", name="currency_code", options=currency_options,
    required=True, placeholder="Select currency", placeholder_disabled=True,
    value_attr="id", label_attr="name" ) }}
    <label for="capex-form-amount">
      Amount
      <input
@@ -100,18 +92,10 @@ title %}Costs · CalMiner{% endblock %} {% block content %}
  <form id="opex-form" class="form-grid">
    {{ select_field( "Scenario", "opex-form-scenario", name="scenario_id",
    options=scenarios, required=True, placeholder="Select a scenario",
    placeholder_disabled=True ) }} {{ select_field( "Currency",
    "opex-form-currency", name="currency_code", options=currency_options,
    required=True, placeholder="Select currency", placeholder_disabled=True,
    value_attr="id", label_attr="name" ) }}
    <label for="opex-form-amount">
      Amount
      <input

templates/currencies.html (new file)

@@ -0,0 +1,131 @@
{% extends "base.html" %}
{% from "partials/components.html" import select_field, feedback, empty_state, table_container with context %}
{% block title %}Currencies · CalMiner{% endblock %}
{% block content %}
<section class="panel" id="currencies-overview">
<header class="panel-header">
<div>
<h2>Currency Overview</h2>
<p class="chart-subtitle">
Current availability of currencies for project inputs.
</p>
</div>
</header>
{% if currency_stats %}
<div class="dashboard-metrics-grid">
<article class="metric-card">
<span class="metric-label">Total Currencies</span>
<span class="metric-value" id="currency-metric-total">{{ currency_stats.total }}</span>
</article>
<article class="metric-card">
<span class="metric-label">Active</span>
<span class="metric-value" id="currency-metric-active">{{ currency_stats.active }}</span>
</article>
<article class="metric-card">
<span class="metric-label">Inactive</span>
<span class="metric-value" id="currency-metric-inactive">{{ currency_stats.inactive }}</span>
</article>
</div>
{% else %} {{ empty_state("currencies-overview-empty", "No currency data
available yet.") }} {% endif %} {% call table_container(
"currencies-table-container", aria_label="Configured currencies",
heading="Configured Currencies" ) %}
<thead>
<tr>
<th scope="col">Code</th>
<th scope="col">Name</th>
<th scope="col">Symbol</th>
<th scope="col">Status</th>
<th scope="col">Actions</th>
</tr>
</thead>
<tbody id="currencies-table-body"></tbody>
{% endcall %} {{ empty_state( "currencies-table-empty", "No currencies
configured yet.", hidden=currencies|length > 0 ) }}
</section>
<section
class="panel"
id="currencies-editor"
data-default-code="{{ default_currency_code }}"
>
<header class="panel-header">
<div>
<h2>Manage Currencies</h2>
<p class="chart-subtitle">
Create new currencies or update existing configurations inline.
</p>
</div>
</header>
{% set status_options = [ {"id": "true", "name": "Active"}, {"id": "false",
"name": "Inactive"} ] %}
<form id="currency-form" class="form-grid" novalidate>
{{ select_field( "Currency to update (leave blank for new)",
"currency-form-existing", name="existing_code", options=currencies,
placeholder="Create a new currency", value_attr="code", label_attr="name" )
}}
<label for="currency-form-code">
Currency code
<input
id="currency-form-code"
name="code"
type="text"
maxlength="3"
required
autocomplete="off"
placeholder="e.g. USD"
/>
</label>
<label for="currency-form-name">
Currency name
<input
id="currency-form-name"
name="name"
type="text"
maxlength="128"
required
autocomplete="off"
placeholder="e.g. US Dollar"
/>
</label>
<label for="currency-form-symbol">
Currency symbol (optional)
<input
id="currency-form-symbol"
name="symbol"
type="text"
maxlength="8"
autocomplete="off"
placeholder="$"
/>
</label>
{{ select_field( "Status", "currency-form-status", name="is_active",
options=status_options, include_blank=False ) }}
<div class="button-row">
<button type="submit" class="btn primary">Save Currency</button>
<button type="button" class="btn" id="currency-form-reset">Reset</button>
</div>
</form>
{{ feedback("currency-form-feedback") }}
</section>
{% endblock %} {% block scripts %} {{ super() }}
<script id="currencies-data" type="application/json">
{{ {
"currencies": currencies,
"currency_stats": currency_stats,
"default_currency_code": default_currency_code,
"currency_api_base": currency_api_base
} | tojson }}
</script>
<script src="/static/js/currencies.js"></script>
{% endblock %}


@@ -15,7 +15,9 @@ block content %}
    </label>
  </div>
  {% else %}
  <p class="empty-state">
    Create a <a href="scenarios">scenario</a> to view equipment inventory.
  </p>
  {% endif %}
  <div id="equipment-empty" class="empty-state">
    Choose a scenario to review the equipment list.
@@ -62,7 +64,9 @@ block content %}
  </form>
  <p id="equipment-feedback" class="feedback hidden" role="status"></p>
  {% else %}
  <p class="empty-state">
    Create a <a href="scenarios">scenario</a> before managing equipment.
  </p>
  {% endif %}
</section>


@@ -15,7 +15,9 @@
    </label>
  </div>
  {% else %}
  <p class="empty-state">
    Create a <a href="scenarios">scenario</a> to view maintenance entries.
  </p>
  {% endif %}
  <div id="maintenance-empty" class="empty-state">
    Choose a scenario to review upcoming or completed maintenance.
@@ -95,7 +97,8 @@
  <p id="maintenance-feedback" class="feedback hidden" role="status"></p>
  {% else %}
  <p class="empty-state">
    Create a <a href="scenarios">scenario</a> before managing maintenance
    entries.
  </p>
  {% endif %}
</section>


@@ -1,5 +1,8 @@
<footer class="site-footer">
  <div class="container footer-inner">
    <p>
      &copy; {{ current_year }} CalMiner by
      <a href="https://allucanget.biz/">AllYouCanGET</a>. All rights reserved.
    </p>
  </div>
</footer>


@@ -2,6 +2,7 @@
    ("/", "Dashboard"),
    ("/ui/scenarios", "Scenarios"),
    ("/ui/parameters", "Parameters"),
("/ui/currencies", "Currencies"),
    ("/ui/costs", "Costs"),
    ("/ui/consumption", "Consumption"),
    ("/ui/production", "Production"),


@@ -15,7 +15,9 @@
    </label>
  </div>
  {% else %}
  <p class="empty-state">
    Create a <a href="scenarios">scenario</a> to view production output data.
  </p>
  {% endif %}
  <div id="production-empty" class="empty-state">
    Choose a scenario to review its production output.
@@ -81,7 +83,9 @@
  </form>
  <p id="production-feedback" class="feedback hidden" role="status"></p>
  {% else %}
  <p class="empty-state">
    Create a <a href="scenarios">scenario</a> before adding production output.
  </p>
  {% endif %}
</section>


@@ -15,7 +15,7 @@
    </label>
  </div>
  {% else %}
  <p class="empty-state">Create a <a href="scenarios">scenario</a> before running simulations.</p>
  {% endif %}
  <div


@@ -1,12 +1,14 @@
import os
import subprocess
import time
from typing import Dict, Generator

import pytest
# type: ignore[import]
from playwright.sync_api import Browser, Page, Playwright, sync_playwright
import httpx
from sqlalchemy.engine import make_url

# Use a different port for the test server to avoid conflicts
TEST_PORT = 8001
@@ -16,6 +18,8 @@ BASE_URL = f"http://localhost:{TEST_PORT}"
@pytest.fixture(scope="session", autouse=True)
def live_server() -> Generator[str, None, None]:
    """Launch a live test server in a separate process."""
    env = _prepare_database_environment(os.environ.copy())
    process = subprocess.Popen(
        [
            "uvicorn",
@@ -26,7 +30,7 @@ def live_server() -> Generator[str, None, None]:
        ],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
        env=env,
    )

    deadline = time.perf_counter() + 30
@@ -35,7 +39,7 @@ def live_server() -> Generator[str, None, None]:
        if process.poll() is not None:
            raise RuntimeError("uvicorn server exited before becoming ready")
        try:
            response = httpx.get(BASE_URL, timeout=1.0, trust_env=False)
            if response.status_code < 500:
                break
        except Exception as exc:  # noqa: BLE001
@@ -60,6 +64,40 @@ def live_server() -> Generator[str, None, None]:
        process.wait(timeout=5)
@pytest.fixture(scope="session", autouse=True)
def seed_default_currencies(live_server: str) -> None:
"""Ensure a baseline set of currencies exists for UI flows."""
seeds = [
{"code": "EUR", "name": "Euro", "symbol": "EUR", "is_active": True},
{"code": "CLP", "name": "Chilean Peso", "symbol": "CLP$", "is_active": True},
]
with httpx.Client(base_url=live_server, timeout=5.0, trust_env=False) as client:
try:
response = client.get("/api/currencies/?include_inactive=true")
response.raise_for_status()
existing_codes = {
str(item.get("code"))
for item in response.json()
if isinstance(item, dict) and item.get("code")
}
except httpx.HTTPError as exc: # noqa: BLE001
raise RuntimeError("Failed to read existing currencies") from exc
for payload in seeds:
if payload["code"] in existing_codes:
continue
try:
create_response = client.post("/api/currencies/", json=payload)
except httpx.HTTPError as exc: # noqa: BLE001
raise RuntimeError("Failed to seed currencies") from exc
if create_response.status_code == 409:
continue
create_response.raise_for_status()
@pytest.fixture(scope="session")
def playwright_instance() -> Generator[Playwright, None, None]:
    """Provide a Playwright instance for the test session."""
@@ -85,3 +123,36 @@ def page(browser: Browser, live_server: str) -> Generator[Page, None, None]:
    page.wait_for_load_state("networkidle")
    yield page
    page.close()
def _prepare_database_environment(env: Dict[str, str]) -> Dict[str, str]:
"""Ensure granular database env vars are available for the app under test."""
required = ("DATABASE_HOST", "DATABASE_USER",
"DATABASE_NAME", "DATABASE_PASSWORD")
if all(env.get(key) for key in required):
return env
legacy_url = env.get("DATABASE_URL")
if not legacy_url:
return env
url = make_url(legacy_url)
env.setdefault("DATABASE_DRIVER", url.drivername)
if url.host:
env.setdefault("DATABASE_HOST", url.host)
if url.port:
env.setdefault("DATABASE_PORT", str(url.port))
if url.username:
env.setdefault("DATABASE_USER", url.username)
if url.password:
env.setdefault("DATABASE_PASSWORD", url.password)
if url.database:
env.setdefault("DATABASE_NAME", url.database)
query_options = dict(url.query) if url.query else {}
options = query_options.get("options")
if isinstance(options, str) and "search_path=" in options:
env.setdefault("DATABASE_SCHEMA", options.split("search_path=")[-1])
return env
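
For reference, make_url is what gives the helper above its field-by-field view of a legacy DATABASE_URL; a sketch with a made-up URL (not a real credential):

```python
from sqlalchemy.engine import make_url

url = make_url(
    "postgresql+psycopg2://calminer:secret@db:5432/calminer_test?options=-csearch_path=app"
)
print(url.drivername, url.host, url.port, url.username, url.database)
# -> postgresql+psycopg2 db 5432 calminer calminer_test
print(dict(url.query).get("options"))  # "-csearch_path=app" -> DATABASE_SCHEMA "app"
```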


@@ -0,0 +1,130 @@
import random
import string
from playwright.sync_api import Page, expect
def _unique_currency_code(existing: set[str]) -> str:
"""Generate a unique three-letter code not present in *existing*."""
alphabet = string.ascii_uppercase
for _ in range(100):
candidate = "".join(random.choices(alphabet, k=3))
if candidate not in existing and candidate != "USD":
return candidate
raise AssertionError(
"Unable to generate a unique currency code for the test run.")
def _metric_value(page: Page, element_id: str) -> int:
locator = page.locator(f"#{element_id}")
expect(locator).to_be_visible()
return int(locator.inner_text().strip())
def _expect_feedback(page: Page, expected_text: str) -> None:
page.wait_for_function(
"expected => {"
" const el = document.getElementById('currency-form-feedback');"
" if (!el) return false;"
" const text = (el.textContent || '').trim();"
" return !el.classList.contains('hidden') && text === expected;"
"}",
arg=expected_text,
)
feedback = page.locator("#currency-form-feedback")
expect(feedback).to_have_text(expected_text)
def test_currency_workflow_create_update_toggle(page: Page) -> None:
"""Exercise create, update, and toggle flows on the currency settings page."""
page.goto("/ui/currencies")
expect(page).to_have_title("Currencies · CalMiner")
expect(page.locator("h2:has-text('Currency Overview')")).to_be_visible()
code_cells = page.locator("#currencies-table-body tr td:nth-child(1)")
existing_codes = {text.strip().upper()
for text in code_cells.all_inner_texts()}
total_before = _metric_value(page, "currency-metric-total")
active_before = _metric_value(page, "currency-metric-active")
inactive_before = _metric_value(page, "currency-metric-inactive")
new_code = _unique_currency_code(existing_codes)
new_name = f"Test Currency {new_code}"
new_symbol = new_code[0]
page.fill("#currency-form-code", new_code)
page.fill("#currency-form-name", new_name)
page.fill("#currency-form-symbol", new_symbol)
page.select_option("#currency-form-status", "true")
with page.expect_response("**/api/currencies/") as create_info:
page.click("button[type='submit']")
create_response = create_info.value
assert create_response.status == 201
_expect_feedback(page, "Currency created successfully.")
page.wait_for_function(
"expected => Number(document.getElementById('currency-metric-total').textContent.trim()) === expected",
arg=total_before + 1,
)
page.wait_for_function(
"expected => Number(document.getElementById('currency-metric-active').textContent.trim()) === expected",
arg=active_before + 1,
)
row = page.locator("#currencies-table-body tr").filter(has_text=new_code)
expect(row).to_be_visible()
expect(row.locator("td").nth(3)).to_have_text("Active")
# Switch to update mode using the existing currency option.
page.select_option("#currency-form-existing", new_code)
updated_name = f"{new_name} Updated"
updated_symbol = f"{new_symbol}$"
page.fill("#currency-form-name", updated_name)
page.fill("#currency-form-symbol", updated_symbol)
page.select_option("#currency-form-status", "false")
with page.expect_response(f"**/api/currencies/{new_code}") as update_info:
page.click("button[type='submit']")
update_response = update_info.value
assert update_response.status == 200
_expect_feedback(page, "Currency updated successfully.")
page.wait_for_function(
"expected => Number(document.getElementById('currency-metric-active').textContent.trim()) === expected",
arg=active_before,
)
page.wait_for_function(
"expected => Number(document.getElementById('currency-metric-inactive').textContent.trim()) === expected",
arg=inactive_before + 1,
)
expect(row.locator("td").nth(1)).to_have_text(updated_name)
expect(row.locator("td").nth(2)).to_have_text(updated_symbol)
expect(row.locator("td").nth(3)).to_contain_text("Inactive")
toggle_button = row.locator("button[data-action='toggle']")
expect(toggle_button).to_have_text("Activate")
with page.expect_response(f"**/api/currencies/{new_code}/activation") as toggle_info:
toggle_button.click()
toggle_response = toggle_info.value
assert toggle_response.status == 200
page.wait_for_function(
"expected => Number(document.getElementById('currency-metric-active').textContent.trim()) === expected",
arg=active_before + 1,
)
page.wait_for_function(
"expected => Number(document.getElementById('currency-metric-inactive').textContent.trim()) === expected",
arg=inactive_before,
)
_expect_feedback(page, f"Currency {new_code} activated.")
expect(row.locator("td").nth(3)).to_contain_text("Active")
expect(row.locator("button[data-action='toggle']")
).to_have_text("Deactivate")


@@ -14,6 +14,7 @@ UI_ROUTES = [
    ("/ui/maintenance", "Maintenance · CalMiner", "Maintenance Schedule"),
    ("/ui/simulations", "Simulations · CalMiner", "Monte Carlo Simulations"),
    ("/ui/reporting", "Reporting · CalMiner", "Scenario KPI Summary"),
    ("/ui/currencies", "Currencies · CalMiner", "Currency Overview"),
]


@@ -0,0 +1,101 @@
from typing import Dict

import pytest

from models.currency import Currency


@pytest.fixture(autouse=True)
def _cleanup_currencies(db_session):
    db_session.query(Currency).delete()
    db_session.commit()
    yield
    db_session.query(Currency).delete()
    db_session.commit()


def _assert_currency(payload: Dict[str, object], code: str, name: str, symbol: str | None, is_active: bool) -> None:
    assert payload["code"] == code
    assert payload["name"] == name
    assert payload["is_active"] is is_active
    if symbol is None:
        assert payload["symbol"] is None
    else:
        assert payload["symbol"] == symbol


def test_list_returns_default_currency(api_client, db_session):
    response = api_client.get("/api/currencies/")
    assert response.status_code == 200
    data = response.json()
    assert any(item["code"] == "USD" for item in data)


def test_create_currency_success(api_client, db_session):
    payload = {"code": "EUR", "name": "Euro", "symbol": "", "is_active": True}
    response = api_client.post("/api/currencies/", json=payload)
    assert response.status_code == 201
    data = response.json()
    _assert_currency(data, "EUR", "Euro", "", True)
    stored = db_session.query(Currency).filter_by(code="EUR").one()
    assert stored.name == "Euro"
    assert stored.symbol == ""
    assert stored.is_active is True


def test_create_currency_conflict(api_client, db_session):
    api_client.post(
        "/api/currencies/",
        json={"code": "CAD", "name": "Canadian Dollar",
              "symbol": "$", "is_active": True},
    )
    duplicate = api_client.post(
        "/api/currencies/",
        json={"code": "CAD", "name": "Canadian Dollar",
              "symbol": "$", "is_active": True},
    )
    assert duplicate.status_code == 409


def test_update_currency_fields(api_client, db_session):
    api_client.post(
        "/api/currencies/",
        json={"code": "GBP", "name": "British Pound",
              "symbol": "£", "is_active": True},
    )
    response = api_client.put(
        "/api/currencies/GBP",
        json={"name": "Pound Sterling", "symbol": "£", "is_active": False},
    )
    assert response.status_code == 200
    data = response.json()
    _assert_currency(data, "GBP", "Pound Sterling", "£", False)


def test_toggle_currency_activation(api_client, db_session):
    api_client.post(
        "/api/currencies/",
        json={"code": "AUD", "name": "Australian Dollar",
              "symbol": "A$", "is_active": True},
    )
    response = api_client.patch(
        "/api/currencies/AUD/activation",
        json={"is_active": False},
    )
    assert response.status_code == 200
    data = response.json()
    _assert_currency(data, "AUD", "Australian Dollar", "A$", False)


def test_default_currency_cannot_be_deactivated(api_client, db_session):
    api_client.get("/api/currencies/")
    response = api_client.patch(
        "/api/currencies/USD/activation",
        json={"is_active": False},
    )
    assert response.status_code == 400
    assert response.json()["detail"] == "The default currency cannot be deactivated."

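These tests assume an api_client fixture backed by the same database session as db_session. A rough sketch of how such a fixture is commonly wired for a FastAPI app; the module paths and dependency name here are assumptions, not taken from the repository:

    import pytest
    from fastapi.testclient import TestClient

    from main import app              # assumed application module
    from dependencies import get_db   # assumed database dependency provider


    @pytest.fixture()
    def api_client(db_session):
        # Point the app's DB dependency at the test session so API calls and
        # direct db_session queries observe the same data.
        app.dependency_overrides[get_db] = lambda: db_session
        with TestClient(app) as client:
            yield client
        app.dependency_overrides.clear()
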
View File

@@ -1,54 +1,74 @@
from uuid import uuid4

import pytest

from models.currency import Currency


@pytest.fixture
def seeded_currency(db_session):
    currency = Currency(code="GBP", name="British Pound", symbol="GBP")
    db_session.add(currency)
    db_session.commit()
    db_session.refresh(currency)
    try:
        yield currency
    finally:
        db_session.delete(currency)
        db_session.commit()


def _create_scenario(api_client):
    payload = {
        "name": f"CurrencyScenario-{uuid4()}",
        "description": "Currency workflow scenario",
    }
    resp = api_client.post("/api/scenarios/", json=payload)
    assert resp.status_code == 200
    return resp.json()["id"]


def test_create_capex_with_currency_code_and_list(api_client, seeded_currency):
    sid = _create_scenario(api_client)
    # create with currency_code
    payload = {
        "scenario_id": sid,
        "amount": 500.0,
        "description": "Capex with GBP",
        "currency_code": seeded_currency.code,
    }
    resp = api_client.post("/api/costs/capex", json=payload)
    assert resp.status_code == 200
    data = resp.json()
    assert data.get("currency_code") == seeded_currency.code or data.get(
        "currency", {}
    ).get("code") == seeded_currency.code


def test_create_opex_with_currency_id(api_client, seeded_currency):
    sid = _create_scenario(api_client)
    resp = api_client.get("/api/currencies/")
    assert resp.status_code == 200
    currencies = resp.json()
    assert any(c["id"] == seeded_currency.id for c in currencies)
    payload = {
        "scenario_id": sid,
        "amount": 120.0,
        "description": "Opex with explicit id",
        "currency_id": seeded_currency.id,
    }
    resp = api_client.post("/api/costs/opex", json=payload)
    assert resp.status_code == 200
    data = resp.json()
    assert data["currency_id"] == seeded_currency.id


def test_list_currencies_endpoint(api_client, seeded_currency):
    resp = api_client.get("/api/currencies/")
    assert resp.status_code == 200
    data = resp.json()
    assert isinstance(data, list)
    assert any(c["id"] == seeded_currency.id for c in data)

View File

@@ -0,0 +1,459 @@
import argparse
from unittest import mock

import psycopg2
import pytest
from psycopg2 import errors as psycopg_errors

import scripts.setup_database as setup_db_module
from scripts import seed_data
from scripts.setup_database import DatabaseConfig, DatabaseSetup


@pytest.fixture()
def mock_config() -> DatabaseConfig:
    return DatabaseConfig(
        driver="postgresql",
        host="localhost",
        port=5432,
        database="calminer_test",
        user="calminer",
        password="secret",
        schema="public",
        admin_user="postgres",
        admin_password="secret",
    )


@pytest.fixture()
def setup_instance(mock_config: DatabaseConfig) -> DatabaseSetup:
    return DatabaseSetup(mock_config, dry_run=True)


def test_seed_baseline_data_dry_run_skips_verification(setup_instance: DatabaseSetup) -> None:
    with mock.patch("scripts.seed_data.run_with_namespace") as seed_run, mock.patch.object(
        setup_instance, "_verify_seeded_data"
    ) as verify_mock:
        setup_instance.seed_baseline_data(dry_run=True)
        seed_run.assert_called_once()
        namespace_arg = seed_run.call_args[0][0]
        assert isinstance(namespace_arg, argparse.Namespace)
        assert namespace_arg.dry_run is True
        assert namespace_arg.currencies is True
        assert namespace_arg.units is True
        assert seed_run.call_args.kwargs["config"] is setup_instance.config
        verify_mock.assert_not_called()


def test_seed_baseline_data_invokes_verification(setup_instance: DatabaseSetup) -> None:
    expected_currencies = {code for code, *_ in seed_data.CURRENCY_SEEDS}
    expected_units = {code for code, *_ in seed_data.MEASUREMENT_UNIT_SEEDS}
    with mock.patch("scripts.seed_data.run_with_namespace") as seed_run, mock.patch.object(
        setup_instance, "_verify_seeded_data"
    ) as verify_mock:
        setup_instance.seed_baseline_data(dry_run=False)
        seed_run.assert_called_once()
        namespace_arg = seed_run.call_args[0][0]
        assert isinstance(namespace_arg, argparse.Namespace)
        assert namespace_arg.dry_run is False
        assert seed_run.call_args.kwargs["config"] is setup_instance.config
        verify_mock.assert_called_once_with(
            expected_currency_codes=expected_currencies,
            expected_unit_codes=expected_units,
        )
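
Together, these two tests pin down the expected shape of seed_baseline_data: build an argparse namespace, delegate to the seeding script, and verify only on real runs. A sketch consistent with the assertions above; the method body is inferred from the tests, not copied from scripts/setup_database.py:

    def seed_baseline_data(self, dry_run: bool = False) -> None:
        namespace = argparse.Namespace(dry_run=dry_run, currencies=True, units=True)
        seed_data.run_with_namespace(namespace, config=self.config)
        if dry_run:
            return  # dry runs skip verification entirely
        self._verify_seeded_data(
            expected_currency_codes={code for code, *_ in seed_data.CURRENCY_SEEDS},
            expected_unit_codes={code for code, *_ in seed_data.MEASUREMENT_UNIT_SEEDS},
        )
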
def test_run_migrations_applies_baseline_when_missing(mock_config: DatabaseConfig, tmp_path) -> None:
    setup_instance = DatabaseSetup(mock_config, dry_run=False)
    baseline = tmp_path / "000_base.sql"
    baseline.write_text("SELECT 1;", encoding="utf-8")
    other_migration = tmp_path / "20251022_add_other.sql"
    other_migration.write_text("SELECT 2;", encoding="utf-8")

    migration_calls: list[str] = []

    def capture_migration(cursor, schema_name: str, path):
        migration_calls.append(path.name)
        return path.name

    connection_mock = mock.MagicMock()
    connection_mock.__enter__.return_value = connection_mock
    cursor_context = mock.MagicMock()
    cursor_mock = mock.MagicMock()
    cursor_context.__enter__.return_value = cursor_mock
    connection_mock.cursor.return_value = cursor_context

    with mock.patch.object(
        setup_instance, "_application_connection", return_value=connection_mock
    ), mock.patch.object(
        setup_instance, "_migrations_table_exists", return_value=True
    ), mock.patch.object(
        setup_instance, "_fetch_applied_migrations", return_value=set()
    ), mock.patch.object(
        setup_instance, "_apply_migration_file", side_effect=capture_migration
    ) as apply_mock:
        setup_instance.run_migrations(tmp_path)
        assert apply_mock.call_count == 1
        assert migration_calls == ["000_base.sql"]
        legacy_marked = any(
            call.args[1] == ("20251022_add_other.sql",)
            for call in cursor_mock.execute.call_args_list
            if len(call.args) == 2
        )
        assert legacy_marked


def test_run_migrations_noop_when_all_files_already_applied(
    mock_config: DatabaseConfig, tmp_path
) -> None:
    setup_instance = DatabaseSetup(mock_config, dry_run=False)
    baseline = tmp_path / "000_base.sql"
    baseline.write_text("SELECT 1;", encoding="utf-8")
    other_migration = tmp_path / "20251022_add_other.sql"
    other_migration.write_text("SELECT 2;", encoding="utf-8")
    connection_mock, cursor_mock = _connection_with_cursor()

    with mock.patch.object(
        setup_instance, "_application_connection", return_value=connection_mock
    ), mock.patch.object(
        setup_instance, "_migrations_table_exists", return_value=True
    ), mock.patch.object(
        setup_instance,
        "_fetch_applied_migrations",
        return_value={"000_base.sql", "20251022_add_other.sql"},
    ), mock.patch.object(
        setup_instance, "_apply_migration_file"
    ) as apply_mock:
        setup_instance.run_migrations(tmp_path)
        apply_mock.assert_not_called()
        cursor_mock.execute.assert_not_called()
def _connection_with_cursor() -> tuple[mock.MagicMock, mock.MagicMock]:
    connection_mock = mock.MagicMock()
    connection_mock.__enter__.return_value = connection_mock
    cursor_context = mock.MagicMock()
    cursor_mock = mock.MagicMock()
    cursor_context.__enter__.return_value = cursor_mock
    connection_mock.cursor.return_value = cursor_context
    return connection_mock, cursor_mock


def test_verify_seeded_data_raises_when_currency_missing(mock_config: DatabaseConfig) -> None:
    setup_instance = DatabaseSetup(mock_config, dry_run=False)
    connection_mock, cursor_mock = _connection_with_cursor()
    cursor_mock.fetchall.return_value = [("USD", True)]
    with mock.patch.object(setup_instance, "_application_connection", return_value=connection_mock):
        with pytest.raises(RuntimeError) as exc:
            setup_instance._verify_seeded_data(
                expected_currency_codes={"USD", "EUR"},
                expected_unit_codes=set(),
            )
        assert "EUR" in str(exc.value)


def test_verify_seeded_data_raises_when_default_currency_inactive(mock_config: DatabaseConfig) -> None:
    setup_instance = DatabaseSetup(mock_config, dry_run=False)
    connection_mock, cursor_mock = _connection_with_cursor()
    cursor_mock.fetchall.return_value = [("USD", False)]
    with mock.patch.object(setup_instance, "_application_connection", return_value=connection_mock):
        with pytest.raises(RuntimeError) as exc:
            setup_instance._verify_seeded_data(
                expected_currency_codes={"USD"},
                expected_unit_codes=set(),
            )
        assert "inactive" in str(exc.value)


def test_verify_seeded_data_raises_when_units_missing(mock_config: DatabaseConfig) -> None:
    setup_instance = DatabaseSetup(mock_config, dry_run=False)
    connection_mock, cursor_mock = _connection_with_cursor()
    cursor_mock.fetchall.return_value = [("tonnes", True)]
    with mock.patch.object(setup_instance, "_application_connection", return_value=connection_mock):
        with pytest.raises(RuntimeError) as exc:
            setup_instance._verify_seeded_data(
                expected_currency_codes=set(),
                expected_unit_codes={"tonnes", "liters"},
            )
        assert "liters" in str(exc.value)


def test_verify_seeded_data_raises_when_measurement_table_missing(mock_config: DatabaseConfig) -> None:
    setup_instance = DatabaseSetup(mock_config, dry_run=False)
    connection_mock, cursor_mock = _connection_with_cursor()
    cursor_mock.execute.side_effect = psycopg_errors.UndefinedTable("relation does not exist")
    with mock.patch.object(setup_instance, "_application_connection", return_value=connection_mock):
        with pytest.raises(RuntimeError) as exc:
            setup_instance._verify_seeded_data(
                expected_currency_codes=set(),
                expected_unit_codes={"tonnes"},
            )
        assert "measurement_unit" in str(exc.value)
        connection_mock.rollback.assert_called_once()
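
The four verification tests imply the contract for _verify_seeded_data: query the seeded rows, report any missing or inactive codes in a RuntimeError, and roll back plus re-raise with context when a table is absent. A condensed sketch of that contract; the table and column names are assumptions:

    def _verify_seeded_data(self, *, expected_currency_codes, expected_unit_codes) -> None:
        with self._application_connection() as connection:
            with connection.cursor() as cursor:
                try:
                    cursor.execute("SELECT code, is_active FROM currencies")  # assumed table name
                    rows = dict(cursor.fetchall())
                except psycopg_errors.UndefinedTable:
                    connection.rollback()
                    raise RuntimeError("Seed verification failed: 'currencies' or 'measurement_unit' table is missing.")
                missing = expected_currency_codes - rows.keys()
                if missing:
                    raise RuntimeError(f"Missing seeded currencies: {sorted(missing)}")
                if "USD" in rows and not rows["USD"]:
                    raise RuntimeError("Default currency USD is inactive after seeding.")
                # ... analogous checks run for expected_unit_codes against measurement_unit rows
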
def test_seed_baseline_data_rerun_uses_existing_records(
    mock_config: DatabaseConfig,
) -> None:
    setup_instance = DatabaseSetup(mock_config, dry_run=False)
    connection_mock, cursor_mock = _connection_with_cursor()
    currency_rows = [(code, True) for code, *_ in seed_data.CURRENCY_SEEDS]
    unit_rows = [(code, True) for code, *_ in seed_data.MEASUREMENT_UNIT_SEEDS]
    cursor_mock.fetchall.side_effect = [
        currency_rows,
        unit_rows,
        currency_rows,
        unit_rows,
    ]
    with mock.patch.object(
        setup_instance, "_application_connection", return_value=connection_mock
    ), mock.patch("scripts.seed_data.run_with_namespace") as seed_run:
        setup_instance.seed_baseline_data(dry_run=False)
        setup_instance.seed_baseline_data(dry_run=False)
        assert seed_run.call_count == 2
        first_namespace = seed_run.call_args_list[0].args[0]
        assert isinstance(first_namespace, argparse.Namespace)
        assert first_namespace.dry_run is False
        assert seed_run.call_args_list[0].kwargs["config"] is setup_instance.config
        assert cursor_mock.execute.call_count == 4


def test_ensure_database_raises_with_context(mock_config: DatabaseConfig) -> None:
    setup_instance = DatabaseSetup(mock_config, dry_run=False)
    connection_mock = mock.MagicMock()
    cursor_mock = mock.MagicMock()
    cursor_mock.fetchone.return_value = None
    cursor_mock.execute.side_effect = [None, psycopg2.Error("create_fail")]
    connection_mock.cursor.return_value = cursor_mock
    with mock.patch.object(setup_instance, "_admin_connection", return_value=connection_mock):
        with pytest.raises(RuntimeError) as exc:
            setup_instance.ensure_database()
        assert "Failed to create database" in str(exc.value)


def test_ensure_role_raises_with_context_during_creation(mock_config: DatabaseConfig) -> None:
    setup_instance = DatabaseSetup(mock_config, dry_run=False)
    admin_conn, admin_cursor = _connection_with_cursor()
    admin_cursor.fetchone.return_value = None
    admin_cursor.execute.side_effect = [None, psycopg2.Error("role_fail")]
    with mock.patch.object(
        setup_instance,
        "_admin_connection",
        side_effect=[admin_conn],
    ):
        with pytest.raises(RuntimeError) as exc:
            setup_instance.ensure_role()
        assert "Failed to create role" in str(exc.value)


def test_ensure_role_raises_with_context_during_privilege_grants(
    mock_config: DatabaseConfig,
) -> None:
    setup_instance = DatabaseSetup(mock_config, dry_run=False)
    admin_conn, admin_cursor = _connection_with_cursor()
    admin_cursor.fetchone.return_value = (1,)
    privilege_conn, privilege_cursor = _connection_with_cursor()
    privilege_cursor.execute.side_effect = [psycopg2.Error("grant_fail")]
    with mock.patch.object(
        setup_instance,
        "_admin_connection",
        side_effect=[admin_conn, privilege_conn],
    ):
        with pytest.raises(RuntimeError) as exc:
            setup_instance.ensure_role()
        assert "Failed to grant privileges" in str(exc.value)
def test_ensure_database_dry_run_skips_creation(mock_config: DatabaseConfig) -> None:
    setup_instance = DatabaseSetup(mock_config, dry_run=True)
    connection_mock = mock.MagicMock()
    cursor_mock = mock.MagicMock()
    cursor_mock.fetchone.return_value = None
    connection_mock.cursor.return_value = cursor_mock
    with mock.patch.object(setup_instance, "_admin_connection", return_value=connection_mock), mock.patch(
        "scripts.setup_database.logger"
    ) as logger_mock:
        setup_instance.ensure_database()
        # expect only existence check, no create attempt
        cursor_mock.execute.assert_called_once()
        logger_mock.info.assert_any_call(
            "Dry run: would create database '%s'. Run without --dry-run to proceed.", mock_config.database
        )


def test_ensure_role_dry_run_skips_creation_and_grants(mock_config: DatabaseConfig) -> None:
    setup_instance = DatabaseSetup(mock_config, dry_run=True)
    admin_conn, admin_cursor = _connection_with_cursor()
    admin_cursor.fetchone.return_value = None
    with mock.patch.object(
        setup_instance,
        "_admin_connection",
        side_effect=[admin_conn],
    ) as conn_mock, mock.patch("scripts.setup_database.logger") as logger_mock:
        setup_instance.ensure_role()
        assert conn_mock.call_count == 1
        admin_cursor.execute.assert_called_once()
        logger_mock.info.assert_any_call(
            "Dry run: would create role '%s'. Run without --dry-run to apply.", mock_config.user
        )


def test_register_rollback_skipped_when_dry_run(mock_config: DatabaseConfig) -> None:
    setup_instance = DatabaseSetup(mock_config, dry_run=True)
    setup_instance._register_rollback("noop", lambda: None)
    assert setup_instance._rollback_actions == []


def test_execute_rollbacks_runs_in_reverse_order(mock_config: DatabaseConfig) -> None:
    setup_instance = DatabaseSetup(mock_config, dry_run=False)
    calls: list[str] = []

    def first_action() -> None:
        calls.append("first")

    def second_action() -> None:
        calls.append("second")

    setup_instance._register_rollback("first", first_action)
    setup_instance._register_rollback("second", second_action)
    with mock.patch("scripts.setup_database.logger"):
        setup_instance.execute_rollbacks()
    assert calls == ["second", "first"]
    assert setup_instance._rollback_actions == []
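
These two tests fix the rollback-registry behaviour: registrations are skipped on dry runs, and execution pops actions in reverse order while emptying the stack. A sketch matching those expectations; attribute names follow the tests, the rest is inferred:

    def _register_rollback(self, label: str, action) -> None:
        # Dry runs never change state, so there is nothing to undo later.
        if self.dry_run:
            return
        self._rollback_actions.append((label, action))

    def execute_rollbacks(self) -> None:
        # Undo in reverse registration order and leave the stack empty.
        while self._rollback_actions:
            label, action = self._rollback_actions.pop()
            logger.info("Rolling back: %s", label)
            action()
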
def test_ensure_database_registers_rollback_action(mock_config: DatabaseConfig) -> None:
    setup_instance = DatabaseSetup(mock_config, dry_run=False)
    connection_mock = mock.MagicMock()
    cursor_mock = mock.MagicMock()
    cursor_mock.fetchone.return_value = None
    connection_mock.cursor.return_value = cursor_mock
    with mock.patch.object(setup_instance, "_admin_connection", return_value=connection_mock), mock.patch.object(
        setup_instance, "_register_rollback"
    ) as register_mock, mock.patch.object(setup_instance, "_drop_database") as drop_mock:
        setup_instance.ensure_database()
        register_mock.assert_called_once()
        label, action = register_mock.call_args[0]
        assert "drop database" in label
        action()
        drop_mock.assert_called_once_with(mock_config.database)


def test_ensure_role_registers_rollback_actions(mock_config: DatabaseConfig) -> None:
    setup_instance = DatabaseSetup(mock_config, dry_run=False)
    admin_conn, admin_cursor = _connection_with_cursor()
    admin_cursor.fetchone.return_value = None
    privilege_conn, privilege_cursor = _connection_with_cursor()
    with mock.patch.object(
        setup_instance,
        "_admin_connection",
        side_effect=[admin_conn, privilege_conn],
    ), mock.patch.object(
        setup_instance, "_register_rollback"
    ) as register_mock, mock.patch.object(
        setup_instance, "_drop_role"
    ) as drop_mock, mock.patch.object(
        setup_instance, "_revoke_role_privileges"
    ) as revoke_mock:
        setup_instance.ensure_role()
        assert register_mock.call_count == 2
        drop_label, drop_action = register_mock.call_args_list[0][0]
        revoke_label, revoke_action = register_mock.call_args_list[1][0]
        assert "drop role" in drop_label
        assert "revoke privileges" in revoke_label
        drop_action()
        drop_mock.assert_called_once_with(mock_config.user)
        revoke_action()
        revoke_mock.assert_called_once()


def test_main_triggers_rollbacks_on_failure(mock_config: DatabaseConfig) -> None:
    args = argparse.Namespace(
        ensure_database=True,
        ensure_role=True,
        ensure_schema=False,
        initialize_schema=False,
        run_migrations=False,
        seed_data=False,
        migrations_dir=None,
        db_driver=None,
        db_host=None,
        db_port=None,
        db_name=None,
        db_user=None,
        db_password=None,
        db_schema=None,
        admin_url=None,
        admin_user=None,
        admin_password=None,
        admin_db=None,
        dry_run=False,
        verbose=0,
    )
    with mock.patch.object(setup_db_module, "parse_args", return_value=args), mock.patch.object(
        setup_db_module.DatabaseConfig, "from_env", return_value=mock_config
    ), mock.patch.object(
        setup_db_module, "DatabaseSetup"
    ) as setup_cls:
        setup_instance = mock.MagicMock()
        setup_instance.dry_run = False
        setup_instance._rollback_actions = [
            ("drop role", mock.MagicMock()),
        ]
        setup_instance.ensure_database.side_effect = RuntimeError("boom")
        setup_instance.execute_rollbacks = mock.MagicMock()
        setup_instance.clear_rollbacks = mock.MagicMock()
        setup_cls.return_value = setup_instance
        with pytest.raises(RuntimeError):
            setup_db_module.main()
        setup_instance.execute_rollbacks.assert_called_once()
        setup_instance.clear_rollbacks.assert_called_once()
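
The final test constrains main()'s failure path: when a step raises and rollback actions are pending, they are executed and then cleared before the exception propagates. A sketch of that control flow; step orchestration and argument handling are abbreviated assumptions:

    def main() -> None:
        args = parse_args()
        config = setup_db_module.DatabaseConfig.from_env(args)  # argument merging assumed
        setup = DatabaseSetup(config, dry_run=args.dry_run)
        try:
            if args.ensure_database:
                setup.ensure_database()
            if args.ensure_role:
                setup.ensure_role()
            # ... remaining optional steps
        except Exception:
            if not setup.dry_run and setup._rollback_actions:
                setup.execute_rollbacks()
            raise
        finally:
            setup.clear_rollbacks()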