75 Commits

Author SHA1 Message Date
971b4a19ea moved docs to own repo
Some checks failed
CI / build (push) Has been cancelled
CI / test (push) Has been cancelled
2025-11-09 13:25:51 +01:00
5b1278cbea fix: add echo statement to log pip cache directory in CI workflow 2025-11-06 12:08:26 +01:00
b6511e5273 feat: add build job to CI workflow for Docker image creation and pushing
All checks were successful
CI / test (push) Successful in 12m19s
CI / build (push) Successful in 3m27s
2025-11-06 12:06:05 +01:00
bcb15bd0e4 chore: remove obsolete CI workflow configuration 2025-11-06 11:59:48 +01:00
42f8714d71 feat: add CI workflow configuration for testing and building
All checks were successful
CI / test (push) Successful in 14m26s
2025-11-06 11:44:35 +01:00
1881ebe24f fix: correct syntax for environment variable references in CI workflow
All checks were successful
CI / test (push) Successful in 6m11s
2025-11-04 21:36:05 +01:00
d90aae3d0a style: update CSS variables and styles for improved theming and consistency
Some checks failed
CI / test (push) Failing after 4m1s
2025-11-04 21:23:52 +01:00
9934d1483d Merge pull request 'feat/ci-overhaul-20251029' (#11) from feat/ci-overhaul-20251029 into main
Some checks failed
CI / test (push) Failing after 2m5s
Reviewed-on: #11
2025-11-02 18:13:14 +01:00
df1c971354 Merge https://git.allucanget.biz/allucanget/calminer into feat/ci-overhaul-20251029
Some checks failed
CI / test (pull_request) Failing after 2m15s
2025-11-02 16:29:25 +01:00
3a8aef04b0 fix: update database connection details in CI workflow for consistency 2025-11-02 16:29:19 +01:00
45d746d80a Merge pull request 'fix: update UVICORN_PORT and UVICORN_WORKERS in Dockerfile for consistency' (#10) from feat/ci-overhaul-20251029 into main
Some checks failed
CI / test (push) Failing after 4m22s
Reviewed-on: #10
2025-11-02 15:59:22 +01:00
f1bc7f06b9 fix: hardcode database connection details in CI workflow for consistency
Some checks failed
CI / test (pull_request) Failing after 1m54s
2025-11-02 13:40:06 +01:00
82e98efb1b fix: remove DB_PORT variable and use hardcoded value in CI workflow for consistency
Some checks failed
CI / test (pull_request) Failing after 2m24s
2025-11-02 13:09:09 +01:00
f91349dedd Merge branch 'main' into feat/ci-overhaul-20251029
Some checks failed
CI / test (pull_request) Failing after 2m47s
2025-11-02 13:02:18 +01:00
efee50fdc7 fix: update UVICORN_PORT and UVICORN_WORKERS in Dockerfile for consistency
Some checks failed
CI / test (pull_request) Failing after 2m39s
2025-11-02 12:23:26 +01:00
e254d50c0c Merge pull request 'fix: refactor database environment variables in CI workflow for consistency' (#9) from feat/ci-overhaul-20251029 into main
Some checks failed
CI / test (push) Failing after 1m55s
Reviewed-on: #9
2025-11-02 11:21:15 +01:00
6eef8424b7 fix: update DB_PORT to be a string in CI workflow for consistency
Some checks failed
CI / test (pull_request) Failing after 2m15s
2025-11-02 11:13:45 +01:00
c1f4902cf4 fix: update UVICORN_PORT to 8003 in Dockerfile and docker-compose.yml
Some checks failed
CI / test (pull_request) Failing after 3m19s
2025-11-02 11:07:28 +01:00
52450bc487 fix: refactor database environment variables in CI workflow for consistency
Some checks failed
CI / test (pull_request) Failing after 6m30s
2025-10-29 15:34:06 +01:00
c3449f1986 Merge pull request 'Add UI and styling documentation; remove idempotency and logging audits' (#8) from feat/ci-overhaul-20251029 into main
All checks were successful
CI / test (push) Successful in 2m22s
Reviewed-on: #8
2025-10-29 14:26:14 +01:00
f863808940 fix: update .gitignore to include ruff cache and clarify act runner files
All checks were successful
CI / test (pull_request) Successful in 2m21s
2025-10-29 14:23:24 +01:00
37646b571a fix: update system dependencies in CI workflow
Some checks failed
CI / test (pull_request) Failing after 2m51s
2025-10-29 13:57:22 +01:00
22f43bed56 fix: update CI workflow to configure apt-cacher-ng and install system dependencies
All checks were successful
CI / test (pull_request) Successful in 3m26s
2025-10-29 13:54:41 +01:00
72cf06a31d feat: add step to install Playwright browsers in CI workflow
Some checks failed
CI / test (pull_request) Failing after 1m10s
2025-10-29 13:39:02 +01:00
b796a053d6 fix: update database host in CI workflow to use service name
Some checks failed
CI / test (pull_request) Failing after 19s
2025-10-29 13:30:56 +01:00
04d7f202b6 Add UI and styling documentation; remove idempotency and logging audits
Some checks failed
CI / test (pull_request) Failing after 1m8s
- Introduced a new document outlining UI structure, reusable template components, CSS variable conventions, and per-page data/actions for the CalMiner application.
- Removed outdated idempotency audit and logging audit documents as they are no longer relevant.
- Updated quickstart guide to streamline developer setup instructions and link to relevant documentation.
- Created a roadmap document detailing scenario enhancements and data management strategies.
- Deleted the seed data plan document to consolidate information into the setup process.
- Refactored setup_database.py for improved logging and error handling during database setup and migration processes.
2025-10-29 13:20:44 +01:00
1f58de448c fix: container/compose/CI overhaul 2025-10-28 18:42:37 +01:00
807204869f fix: Improve database connection retry logic with detailed error messages
All checks were successful
Run Tests / Lint (push) Successful in 35s
Run Tests / Unit Tests (push) Successful in 47s
2025-10-28 15:04:52 +01:00
ddb23b1da0 fix: Update deployment script to use fallback branch for image tagging 2025-10-28 15:03:21 +01:00
26e231d63f Merge pull request 'fix: Enhance workflow conditions for E2E tests and deployment processes' (#7) from fest/ci-improvement into main
All checks were successful
Run Tests / Lint (push) Successful in 36s
Run Tests / Unit Tests (push) Successful in 42s
Reviewed-on: #7
2025-10-28 14:44:14 +01:00
d98d6ebe83 fix: Enhance workflow conditions for E2E tests and deployment processes
All checks were successful
Run E2E Tests / E2E Tests (push) Successful in 1m16s
Run Tests / Lint (push) Successful in 35s
Run Tests / Unit Tests (push) Successful in 42s
2025-10-28 14:41:00 +01:00
e881be52b5 Merge pull request 'feat/ci-improvement' (#6) from fest/ci-improvement into main
All checks were successful
Run E2E Tests / E2E Tests (push) Successful in 1m16s
Run Tests / Lint (push) Successful in 36s
Run Tests / Unit Tests (push) Successful in 41s
Reviewed-on: #6
2025-10-28 14:16:47 +01:00
cc8efa3eab Merge https://git.allucanget.biz/allucanget/calminer into fest/ci-improvement
All checks were successful
Run E2E Tests / E2E Tests (push) Successful in 1m17s
Run Tests / Lint (push) Successful in 36s
Run Tests / Unit Tests (push) Successful in 43s
2025-10-28 14:12:32 +01:00
29a17595da fix: Update E2E test workflow conditions and branch ignore settings
Some checks failed
Run Tests / Lint (push) Has been cancelled
Run Tests / Unit Tests (push) Has been cancelled
Run E2E Tests / E2E Tests (push) Has been cancelled
2025-10-28 14:11:36 +01:00
a0431cb630 Merge pull request 'refactor: Update workflow triggers for E2E tests and deployment processes' (#5) from fest/ci-improvement into main
All checks were successful
Run Tests / Lint (push) Successful in 36s
Run Tests / Unit Tests (push) Successful in 42s
Reviewed-on: #5
2025-10-28 13:55:34 +01:00
f1afcaa78b Merge https://git.allucanget.biz/allucanget/calminer into fest/ci-improvement
All checks were successful
Run Tests / Lint (push) Successful in 36s
Run Tests / Unit Tests (push) Successful in 42s
2025-10-28 13:54:07 +01:00
36da0609ed refactor: Update workflow triggers for E2E tests and deployment processes
All checks were successful
Run Tests / Lint (push) Successful in 36s
Run Tests / Unit Tests (push) Successful in 42s
2025-10-28 13:23:25 +01:00
26843104ee fix: Update workflow names and conditions for E2E tests
All checks were successful
Run Tests / Lint (push) Successful in 36s
Run Tests / Unit Tests (push) Successful in 42s
2025-10-28 11:26:41 +01:00
eb509e3dd2 Merge pull request 'feat/ci-improvement' (#4) from fest/ci-improvement into main
All checks were successful
Run E2E Tests / E2E Tests (push) Successful in 1m16s
Run Tests / Lint (push) Successful in 38s
Run Tests / Unit Tests (push) Successful in 42s
Reviewed-on: #4
2025-10-28 09:07:57 +01:00
51aa2fa71d Merge branch 'main' into fest/ci-improvement
All checks were successful
Run E2E Tests / E2E Tests (push) Successful in 1m16s
Run Tests / Lint (push) Successful in 36s
Run Tests / Unit Tests (push) Successful in 44s
Run E2E Tests / E2E Tests (pull_request) Successful in 1m21s
2025-10-28 09:00:05 +01:00
e1689c3a31 fix: Update pydantic version constraint in requirements.txt
All checks were successful
Run E2E Tests / E2E Tests (push) Successful in 1m16s
Run Tests / Lint (push) Successful in 52s
Run Tests / Unit Tests (push) Successful in 41s
Run E2E Tests / E2E Tests (pull_request) Successful in 1m16s
2025-10-28 08:52:37 +01:00
99d9ea7770 fix: Downgrade upload-artifact action to v3 for consistency
Some checks failed
Run E2E Tests / E2E Tests (push) Successful in 3m48s
Run Tests / Lint (push) Successful in 1m18s
Run Tests / Unit Tests (push) Failing after 57s
2025-10-28 08:34:27 +01:00
2136dbdd44 fix: Ensure bash shell is explicitly set for running E2E tests
Some checks failed
Run E2E Tests / E2E Tests (push) Failing after 1m47s
Run Tests / Lint (push) Successful in 50s
Run Tests / Unit Tests (push) Successful in 1m11s
2025-10-28 08:29:12 +01:00
3da8a50ac4 feat: Add E2E testing workflow with Playwright and PostgreSQL service
Some checks failed
Run E2E Tests / E2E Tests (push) Failing after 5m12s
Run Tests / Lint (push) Successful in 37s
Run Tests / Unit Tests (push) Successful in 44s
2025-10-28 08:19:07 +01:00
a772960390 feat: Add option to create isolated virtual environment in Python setup action
All checks were successful
Run Tests / Lint (push) Successful in 36s
Run Tests / Unit Tests (push) Successful in 42s
Run Tests / E2E Tests (push) Successful in 12m58s
2025-10-28 07:56:24 +01:00
89a4f663b5 feat: Add virtual environment creation step for Python setup
Some checks failed
Run Tests / Lint (push) Successful in 36s
Run Tests / Unit Tests (push) Successful in 42s
Run Tests / E2E Tests (push) Failing after 9m29s
2025-10-28 07:42:25 +01:00
50446c4248 feat: Refactor test workflow to separate lint, unit, and e2e jobs with health checks for PostgreSQL service
Some checks failed
Run Tests / Lint (push) Failing after 4s
Run Tests / Unit Tests (push) Failing after 5s
Run Tests / E2E Tests (push) Successful in 8m42s
2025-10-28 06:49:22 +01:00
c5a9a7c96f fix: Remove conditional execution for Node.js runtime installation in test workflow
All checks were successful
Run Tests / e2e tests (push) Successful in 1m17s
Run Tests / lint tests (push) Successful in 1m49s
Run Tests / unit tests (push) Successful in 55s
2025-10-27 22:07:31 +01:00
723f6a62b8 feat: Enhance CI workflows with health checks and update PostgreSQL image version
Some checks failed
Run Tests / e2e tests (push) Successful in 1m33s
Run Tests / lint tests (push) Failing after 2s
Run Tests / unit tests (push) Failing after 2s
2025-10-27 21:12:46 +01:00
dcb08ab1b8 feat: Add production and development Docker Compose configurations, health check endpoint, and update documentation 2025-10-27 20:57:36 +01:00
a6a5f630cc feat: Add initial Docker Compose configuration for API service 2025-10-27 19:46:35 +01:00
b56045ca6a feat: Add Docker Compose configuration for testing and API services 2025-10-27 19:44:43 +01:00
2f07e6fb75 fix: Update Playwright Python container version to v1.55.0
All checks were successful
Run Tests / e2e tests (push) Successful in 3m1s
Run Tests / lint tests (push) Successful in 1m5s
Run Tests / unit tests (push) Successful in 57s
2025-10-27 19:07:10 +01:00
1f8a595243 fix: Export PYTHONPATH to GitHub environment for test workflows
Some checks failed
Run Tests / e2e tests (push) Failing after 55s
Run Tests / lint tests (push) Successful in 1m58s
Run Tests / unit tests (push) Successful in 2m1s
2025-10-27 18:58:18 +01:00
54137b88d7 feat: Enhance Python environment setup with system Python option and improve dependency installation
Some checks failed
Run Tests / e2e tests (push) Failing after 50s
Run Tests / lint tests (push) Failing after 1m53s
Run Tests / unit tests (push) Failing after 2m25s
refactor: Clean up imports in currencies and users routes
fix: Update theme settings saving logic and clean up test imports
2025-10-27 18:39:20 +01:00
7385bdad3e feat: Add theme normalization and API integration for theme settings
Some checks failed
Run Tests / e2e tests (push) Failing after 20s
Run Tests / lint tests (push) Failing after 21s
Run Tests / unit tests (push) Failing after 21s
2025-10-27 18:04:15 +01:00
7d0c8bfc53 fix: Improve proxy configuration handling in setup action
Some checks failed
Run Tests / e2e tests (push) Failing after 20s
Run Tests / lint tests (push) Failing after 21s
Run Tests / unit tests (push) Failing after 22s
2025-10-27 16:47:59 +01:00
a861efeabf fix: Add Node.js runtime installation step to test workflow
Some checks failed
Run Tests / e2e tests (push) Failing after 21s
Run Tests / lint tests (push) Failing after 22s
Run Tests / unit tests (push) Failing after 21s
2025-10-27 15:39:53 +01:00
2f5306b793 fix: Update container configuration for test jobs to use specific Playwright image
Some checks failed
Run Tests / e2e tests (push) Failing after 1m26s
Run Tests / lint tests (push) Failing after 2s
Run Tests / unit tests (push) Failing after 2s
2025-10-27 15:29:05 +01:00
573e255769 fix: Enhance argument handling in seed data script and add unit tests
Some checks failed
Run Tests / e2e tests (push) Failing after 2s
Run Tests / lint tests (push) Failing after 2s
Run Tests / unit tests (push) Failing after 2s
2025-10-27 15:12:50 +01:00
8bb5456864 fix: Update container condition for e2e tests in workflow 2025-10-27 14:59:44 +01:00
b1d50a56e0 feat: Consolidate user, role, and theme settings tables into a single migration file
Some checks failed
Run Tests / e2e tests (push) Failing after 3s
Run Tests / lint tests (push) Failing after 1m30s
Run Tests / unit tests (push) Failing after 1m32s
2025-10-27 14:56:37 +01:00
e37488bcf6 fix: Comment out pip dependency caching in test workflow
Some checks failed
Run Tests / e2e tests (push) Failing after 2s
Run Tests / lint tests (push) Failing after 1m25s
Run Tests / unit tests (push) Failing after 1m21s
2025-10-27 12:51:58 +01:00
ee0a7a5bf5 fix: Add missing newlines for improved readability in test workflow
Some checks failed
Run Tests / e2e tests (push) Failing after 3s
Run Tests / unit tests (push) Has been cancelled
Run Tests / lint tests (push) Has been cancelled
2025-10-27 12:50:20 +01:00
ef4fb7dcf0 Refactor architecture documentation and enhance security features
Some checks failed
Run Tests / e2e tests (push) Failing after 1m20s
Run Tests / unit tests (push) Has been cancelled
Run Tests / lint tests (push) Has been cancelled
- Updated architecture constraints documentation to include detailed sections on technical, organizational, regulatory, environmental, and performance constraints.
- Created separate markdown files for each type of constraint for better organization and clarity.
- Revised the architecture scope section to provide a clearer overview of the system's key areas.
- Enhanced the solution strategy documentation with detailed explanations of the client-server architecture, technology choices, trade-offs, and future considerations.
- Added comprehensive descriptions of backend and frontend components, middleware, and utilities in the architecture documentation.
- Migrated UI, templates, and styling notes to a dedicated section for better structure.
- Updated requirements.txt to include missing dependencies.
- Refactored user authentication logic in the users.py and security.py files to improve code organization and maintainability, including the integration of OAuth2 password bearer token handling.
2025-10-27 12:46:51 +01:00
7f4cd33b65 fix: Update authentication system to use passlib for password hashing
Some checks failed
Run Tests / e2e tests (push) Failing after 1m25s
Run Tests / lint tests (push) Failing after 6s
Run Tests / unit tests (push) Failing after 5s
2025-10-27 10:57:27 +01:00
41156a87d1 fix: Ensure bcrypt and passlib are included in requirements.txt
Some checks failed
Run Tests / e2e tests (push) Failing after 1m26s
Run Tests / lint tests (push) Failing after 6s
Run Tests / unit tests (push) Failing after 7s
2025-10-27 10:46:34 +01:00
3fc6a2a9d3 feat: Add detailed component diagrams and architecture overviews to Building Block View documentation 2025-10-27 10:43:58 +01:00
f3da80885f fix: Remove duplicate playwright entry and reorder dependencies in requirements-test.txt
Some checks failed
Run Tests / e2e tests (push) Failing after 1m23s
Run Tests / lint tests (push) Failing after 5s
Run Tests / unit tests (push) Failing after 5s
2025-10-27 10:37:45 +01:00
97b1c0360b Refactor test cases for improved readability and consistency
Some checks failed
Run Tests / e2e tests (push) Failing after 1m27s
Run Tests / lint tests (push) Failing after 6s
Run Tests / unit tests (push) Failing after 7s
- Updated test functions in various test files to enhance code clarity by formatting long lines and improving indentation.
- Adjusted assertions to use multi-line formatting for better readability.
- Added new test cases for theme settings API to ensure proper functionality.
- Ensured consistent use of line breaks and spacing across test files for uniformity.
2025-10-27 10:32:55 +01:00
e8a86b15e4 feat: Enhance CI workflows by adding linting step, updating documentation, and configuring development dependencies 2025-10-27 08:54:11 +01:00
300ecebe23 Merge pull request 'fest/ci-improvement' (#3) from fest/ci-improvement into main
All checks were successful
Run Tests / e2e tests (push) Successful in 1m48s
Run Tests / unit tests (push) Successful in 10s
Reviewed-on: #3
2025-10-25 22:03:20 +02:00
70db34d088 feat: Implement composite action for Python environment setup and refactor test workflow to utilize it
All checks were successful
Run Tests / e2e tests (push) Successful in 1m48s
Run Tests / unit tests (push) Successful in 10s
2025-10-25 22:00:28 +02:00
0550928a2f feat: Update CI workflows for Docker image build and deployment, enhance test configurations, and add testing documentation
All checks were successful
Run Tests / e2e tests (push) Successful in 1m49s
Run Tests / unit tests (push) Successful in 11s
2025-10-25 21:28:49 +02:00
ec56099e2a Merge pull request 'feat/app-settings' (#2) from feat/app-settings into main
Some checks failed
Run Tests / test (push) Successful in 1m56s
Deploy to Server / deploy (push) Failing after 2s
Build and Push Docker Image / build-and-push (push) Successful in 1m2s
Reviewed-on: #2
2025-10-25 19:36:36 +02:00
117 changed files with 2732 additions and 3112 deletions

.dockerignore

@@ -10,6 +10,8 @@ venv/
 .vscode
 .git
 .gitignore
+.gitea
+.github
 .DS_Store
 dist
 build
@@ -17,5 +19,9 @@ build
 *.sqlite3
 .env
 .env.*
-.Dockerfile
-.dockerignore
+coverage/
+logs/
+backups/
+tests/e2e/artifacts/
+scripts/__pycache__/
+reports/

.gitea/workflows/build-and-push.yml (deleted)

@@ -1,59 +0,0 @@
name: Build and Push Docker Image

on:
  push:
    branches:
      - main

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    env:
      DEFAULT_BRANCH: main
      REGISTRY_ORG: allucanget
      REGISTRY_IMAGE_NAME: calminer
      REGISTRY_URL: ${{ secrets.REGISTRY_URL }}
      REGISTRY_USERNAME: ${{ secrets.REGISTRY_USERNAME }}
      REGISTRY_PASSWORD: ${{ secrets.REGISTRY_PASSWORD }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Collect workflow metadata
        id: meta
        shell: bash
        run: |
          ref_name="${GITHUB_REF_NAME:-${GITHUB_REF##*/}}"
          event_name="${GITHUB_EVENT_NAME:-}"
          sha="${GITHUB_SHA:-}"
          if [ "$ref_name" = "${DEFAULT_BRANCH:-main}" ]; then
            echo "on_default=true" >> "$GITHUB_OUTPUT"
          else
            echo "on_default=false" >> "$GITHUB_OUTPUT"
          fi
          echo "ref_name=$ref_name" >> "$GITHUB_OUTPUT"
          echo "event_name=$event_name" >> "$GITHUB_OUTPUT"
          echo "sha=$sha" >> "$GITHUB_OUTPUT"
      - name: Set up QEMU and Buildx
        uses: docker/setup-buildx-action@v3
      - name: Log in to Gitea registry
        if: ${{ steps.meta.outputs.on_default == 'true' }}
        uses: docker/login-action@v3
        continue-on-error: true
        with:
          registry: ${{ env.REGISTRY_URL }}
          username: ${{ env.REGISTRY_USERNAME }}
          password: ${{ env.REGISTRY_PASSWORD }}
      - name: Build and push Docker image
        uses: docker/build-push-action@v4
        with:
          context: .
          file: Dockerfile
          push: ${{ steps.meta.outputs.on_default == 'true' && steps.meta.outputs.event_name != 'pull_request' && (env.REGISTRY_URL != '' && env.REGISTRY_USERNAME != '' && env.REGISTRY_PASSWORD != '') }}
          tags: |
            ${{ env.REGISTRY_URL }}/${{ env.REGISTRY_ORG }}/${{ env.REGISTRY_IMAGE_NAME }}:latest
            ${{ env.REGISTRY_URL }}/${{ env.REGISTRY_ORG }}/${{ env.REGISTRY_IMAGE_NAME }}:${{ steps.meta.outputs.sha }}

.gitea/workflows/ci.yml (new file)

@@ -0,0 +1,141 @@
name: CI

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main, develop]

jobs:
  test:
    env:
      APT_CACHER_NG: http://192.168.88.14:3142
      DB_DRIVER: postgresql+psycopg2
      DB_HOST: 192.168.88.35
      DB_NAME: calminer_test
      DB_USER: calminer
      DB_PASSWORD: calminer_password
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:17
        env:
          POSTGRES_USER: ${{ env.DB_USER }}
          POSTGRES_PASSWORD: ${{ env.DB_PASSWORD }}
          POSTGRES_DB: ${{ env.DB_NAME }}
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.11'
      - name: Get pip cache dir
        id: pip-cache
        run: |
          echo "path=$(pip cache dir)" >> $GITEA_OUTPUT
          echo "Pip cache dir: $(pip cache dir)"
      - name: Cache pip dependencies
        uses: actions/cache@v4
        with:
          path: ${{ steps.pip-cache.outputs.path }}
          key: ${{ runner.os }}-pip-${{ hashFiles('requirements.txt', 'requirements-test.txt') }}
          restore-keys: |
            ${{ runner.os }}-pip-
      - name: Update apt-cacher-ng config
        run: |-
          echo 'Acquire::http::Proxy "${{ env.APT_CACHER_NG }}";' | tee /etc/apt/apt.conf.d/01apt-cacher-ng
          apt-get update
      - name: Update system packages
        run: apt-get upgrade -y
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          pip install -r requirements-test.txt
      - name: Install Playwright system dependencies
        run: playwright install-deps
      - name: Install Playwright browsers
        run: playwright install
      - name: Run tests
        env:
          DATABASE_DRIVER: ${{ env.DB_DRIVER }}
          DATABASE_HOST: postgres
          DATABASE_PORT: 5432
          DATABASE_USER: ${{ env.DB_USER }}
          DATABASE_PASSWORD: ${{ env.DB_PASSWORD }}
          DATABASE_NAME: ${{ env.DB_NAME }}
        run: |
          pytest tests/ --cov=.
      - name: Build Docker image
        run: |
          docker build -t calminer .

  build:
    runs-on: ubuntu-latest
    needs: test
    env:
      DEFAULT_BRANCH: main
      REGISTRY_URL: ${{ secrets.REGISTRY_URL }}
      REGISTRY_USERNAME: ${{ secrets.REGISTRY_USERNAME }}
      REGISTRY_PASSWORD: ${{ secrets.REGISTRY_PASSWORD }}
      REGISTRY_CONTAINER_NAME: calminer
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Collect workflow metadata
        id: meta
        shell: bash
        run: |
          ref_name="${GITHUB_REF_NAME:-${GITHUB_REF##*/}}"
          event_name="${GITHUB_EVENT_NAME:-}"
          sha="${GITHUB_SHA:-}"
          if [ "$ref_name" = "${DEFAULT_BRANCH:-main}" ]; then
            echo "on_default=true" >> "$GITHUB_OUTPUT"
          else
            echo "on_default=false" >> "$GITHUB_OUTPUT"
          fi
          echo "ref_name=$ref_name" >> "$GITHUB_OUTPUT"
          echo "event_name=$event_name" >> "$GITHUB_OUTPUT"
          echo "sha=$sha" >> "$GITHUB_OUTPUT"
      - name: Set up QEMU and Buildx
        uses: docker/setup-buildx-action@v3
      - name: Log in to gitea registry
        if: ${{ steps.meta.outputs.on_default == 'true' }}
        uses: docker/login-action@v3
        continue-on-error: true
        with:
          registry: ${{ env.REGISTRY_URL }}
          username: ${{ env.REGISTRY_USERNAME }}
          password: ${{ env.REGISTRY_PASSWORD }}
      - name: Build and push image
        uses: docker/build-push-action@v5
        with:
          context: .
          file: Dockerfile
          push: ${{ steps.meta.outputs.on_default == 'true' && steps.meta.outputs.event_name != 'pull_request' && (env.REGISTRY_URL != '' && env.REGISTRY_USERNAME != '' && env.REGISTRY_PASSWORD != '') }}
          tags: |
            ${{ env.REGISTRY_URL }}/allucanget/${{ env.REGISTRY_CONTAINER_NAME }}:latest
            ${{ env.REGISTRY_URL }}/allucanget/${{ env.REGISTRY_CONTAINER_NAME }}:${{ steps.meta.outputs.sha }}

.gitea/workflows/deploy.yml (deleted)

@@ -1,36 +0,0 @@
name: Deploy to Server

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      DEFAULT_BRANCH: main
      REGISTRY_ORG: allucanget
      REGISTRY_IMAGE_NAME: calminer
      REGISTRY_URL: ${{ secrets.REGISTRY_URL }}
      REGISTRY_USERNAME: ${{ secrets.REGISTRY_USERNAME }}
      REGISTRY_PASSWORD: ${{ secrets.REGISTRY_PASSWORD }}
    steps:
      - name: SSH and deploy
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.SSH_HOST }}
          username: ${{ secrets.SSH_USERNAME }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          script: |
            docker pull ${{ env.REGISTRY_URL }}/${{ env.REGISTRY_ORG }}/${{ env.REGISTRY_IMAGE_NAME }}:latest
            docker stop calminer || true
            docker rm calminer || true
            docker run -d --name calminer -p 8000:8000 \
              -e DATABASE_DRIVER=${{ secrets.DATABASE_DRIVER }} \
              -e DATABASE_HOST=${{ secrets.DATABASE_HOST }} \
              -e DATABASE_PORT=${{ secrets.DATABASE_PORT }} \
              -e DATABASE_USER=${{ secrets.DATABASE_USER }} \
              -e DATABASE_PASSWORD=${{ secrets.DATABASE_PASSWORD }} \
              -e DATABASE_NAME=${{ secrets.DATABASE_NAME }} \
              -e DATABASE_SCHEMA=${{ secrets.DATABASE_SCHEMA }} \
              ${{ secrets.REGISTRY_URL }}/${{ secrets.REGISTRY_USERNAME }}/calminer:latest

.gitea/workflows/test.yml (deleted)

@@ -1,125 +0,0 @@
name: Run Tests

on: [push]

jobs:
  test:
    services:
      postgres:
        image: postgres:16-alpine
        env:
          POSTGRES_DB: calminer_ci
          POSTGRES_USER: calminer
          POSTGRES_PASSWORD: secret
        ports:
          - 5432:5432
        options: >-
          --health-cmd "pg_isready -U calminer -d calminer_ci"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 10
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.10"
      - name: Configure apt proxy
        run: |
          set -euo pipefail
          PROXY_HOST="http://apt-cacher:3142"
          if ! curl -fsS --connect-timeout 3 "${PROXY_HOST}" >/dev/null; then
            PROXY_HOST="http://192.168.88.14:3142"
          fi
          echo "Using APT proxy ${PROXY_HOST}"
          echo "http_proxy=${PROXY_HOST}" >> "$GITHUB_ENV"
          echo "https_proxy=${PROXY_HOST}" >> "$GITHUB_ENV"
          echo "HTTP_PROXY=${PROXY_HOST}" >> "$GITHUB_ENV"
          echo "HTTPS_PROXY=${PROXY_HOST}" >> "$GITHUB_ENV"
          sudo tee /etc/apt/apt.conf.d/01proxy >/dev/null <<EOF
          Acquire::http::Proxy "${PROXY_HOST}";
          Acquire::https::Proxy "${PROXY_HOST}";
          EOF
      # - name: Cache pip
      #   uses: actions/cache@v4
      #   with:
      #     path: ~/.cache/pip
      #     key: ${{ runner.os }}-pip-${{ hashFiles('requirements.txt', 'requirements-test.txt') }}
      #     restore-keys: |
      #       ${{ runner.os }}-pip-${{ hashFiles('requirements.txt') }}
      #       ${{ runner.os }}-pip-
      - name: Install dependencies
        run: |
          pip install -r requirements.txt
          pip install -r requirements-test.txt
      - name: Install Playwright browsers
        run: |
          python -m playwright install --with-deps
      - name: Wait for database service
        env:
          DATABASE_DRIVER: postgresql
          DATABASE_HOST: postgres
          DATABASE_PORT: "5432"
          DATABASE_NAME: calminer_ci
          DATABASE_USER: calminer
          DATABASE_PASSWORD: secret
          DATABASE_SCHEMA: public
          DATABASE_SUPERUSER: calminer
          DATABASE_SUPERUSER_PASSWORD: secret
          DATABASE_SUPERUSER_DB: calminer_ci
        run: |
          python - <<'PY'
          import os
          import time
          import psycopg2
          dsn = (
              f"dbname={os.environ['DATABASE_SUPERUSER_DB']} "
              f"user={os.environ['DATABASE_SUPERUSER']} "
              f"password={os.environ['DATABASE_SUPERUSER_PASSWORD']} "
              f"host={os.environ['DATABASE_HOST']} "
              f"port={os.environ['DATABASE_PORT']}"
          )
          for attempt in range(30):
              try:
                  with psycopg2.connect(dsn):
                      break
              except psycopg2.OperationalError:
                  time.sleep(2)
          else:
              raise SystemExit("Postgres service did not become available")
          PY
      - name: Run database setup (dry run)
        env:
          DATABASE_DRIVER: postgresql
          DATABASE_HOST: postgres
          DATABASE_PORT: "5432"
          DATABASE_NAME: calminer_ci
          DATABASE_USER: calminer
          DATABASE_PASSWORD: secret
          DATABASE_SCHEMA: public
          DATABASE_SUPERUSER: calminer
          DATABASE_SUPERUSER_PASSWORD: secret
          DATABASE_SUPERUSER_DB: calminer_ci
        run: python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data --dry-run -v
      - name: Run database setup
        env:
          DATABASE_DRIVER: postgresql
          DATABASE_HOST: postgres
          DATABASE_PORT: "5432"
          DATABASE_NAME: calminer_ci
          DATABASE_USER: calminer
          DATABASE_PASSWORD: secret
          DATABASE_SCHEMA: public
          DATABASE_SUPERUSER: calminer
          DATABASE_SUPERUSER_PASSWORD: secret
          DATABASE_SUPERUSER_DB: calminer_ci
        run: python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data -v
      - name: Run tests
        env:
          DATABASE_URL: postgresql+psycopg2://calminer:secret@postgres:5432/calminer_ci
          DATABASE_SCHEMA: public
        run: pytest

.gitignore (vendored)

@@ -38,6 +38,9 @@ htmlcov/
 # Mypy cache
 .mypy_cache/
+
+# Linting cache
+.ruff_cache/
 
 # Logs
 *.log
 logs/
@@ -45,3 +48,6 @@ logs/
 # SQLite database
 *.sqlite3
 test*.db
+
+# Act runner files
+.runner

.prettierrc (new file)

@@ -0,0 +1,8 @@
{
  "semi": true,
  "singleQuote": true,
  "trailingComma": "es5",
  "printWidth": 80,
  "tabWidth": 2,
  "useTabs": false
}

Dockerfile

@@ -1,35 +1,111 @@
-# Multi-stage Dockerfile to keep final image small
-FROM python:3.10-slim AS builder
-
-# Install build-time packages and Python dependencies in one layer
-WORKDIR /app
-COPY requirements.txt /app/requirements.txt
-RUN echo 'Acquire::http::Proxy "http://192.168.88.14:3142";' > /etc/apt/apt.conf.d/90proxy
-RUN apt-get update \
-    && apt-get install -y --no-install-recommends build-essential gcc libpq-dev \
-    && python -m pip install --upgrade pip \
-    && pip install --no-cache-dir --prefix=/install -r /app/requirements.txt \
-    && apt-get purge -y --auto-remove build-essential gcc \
-    && rm -rf /var/lib/apt/lists/*
-
-FROM python:3.10-slim
-WORKDIR /app
-
-# Copy installed packages from builder
-COPY --from=builder /install /usr/local
-
-# Assume environment variables for DB config will be set at runtime
-# ENV DATABASE_HOST=your_db_host
-# ENV DATABASE_PORT=your_db_port
-# ENV DATABASE_NAME=your_db_name
-# ENV DATABASE_USER=your_db_user
-# ENV DATABASE_PASSWORD=your_db_password
-
-# Copy application code
-COPY . /app
-
-# Expose service port
-EXPOSE 8000
-
-# Run the FastAPI app with uvicorn
-CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
+# syntax=docker/dockerfile:1.7
+
+ARG PYTHON_VERSION=3.11-slim
+ARG APT_CACHE_URL=http://192.168.88.14:3142
+
+FROM python:${PYTHON_VERSION} AS builder
+ARG APT_CACHE_URL
+
+ENV \
+    PIP_DISABLE_PIP_VERSION_CHECK=1 \
+    PIP_NO_CACHE_DIR=1 \
+    PYTHONDONTWRITEBYTECODE=1 \
+    PYTHONUNBUFFERED=1
+
+WORKDIR /app
+
+COPY requirements.txt ./requirements.txt
+
+RUN --mount=type=cache,target=/root/.cache/pip /bin/bash <<'EOF'
+set -e
+python3 <<'PY'
+import os, socket, urllib.parse
+url = os.environ.get('APT_CACHE_URL', '').strip()
+if url:
+    parsed = urllib.parse.urlparse(url)
+    host = parsed.hostname
+    port = parsed.port or (80 if parsed.scheme == 'http' else 443)
+    if host:
+        sock = socket.socket()
+        sock.settimeout(1)
+        try:
+            sock.connect((host, port))
+        except OSError:
+            pass
+        else:
+            with open('/etc/apt/apt.conf.d/01proxy', 'w', encoding='utf-8') as fh:
+                fh.write(f"Acquire::http::Proxy \"{url}\";\n")
+                fh.write(f"Acquire::https::Proxy \"{url}\";\n")
+        finally:
+            sock.close()
+PY
+apt-get update
+apt-get install -y --no-install-recommends build-essential gcc libpq-dev
+pip install --upgrade pip
+pip wheel --no-deps --wheel-dir /wheels -r requirements.txt
+apt-get purge -y --auto-remove build-essential gcc
+rm -rf /var/lib/apt/lists/*
+EOF
+
+FROM python:${PYTHON_VERSION} AS runtime
+ARG APT_CACHE_URL
+
+ENV \
+    PIP_DISABLE_PIP_VERSION_CHECK=1 \
+    PIP_NO_CACHE_DIR=1 \
+    PYTHONDONTWRITEBYTECODE=1 \
+    PYTHONUNBUFFERED=1 \
+    PATH="/home/appuser/.local/bin:${PATH}"
+
+WORKDIR /app
+
+RUN groupadd --system app && useradd --system --create-home --gid app appuser
+
+RUN /bin/bash <<'EOF'
+set -e
+python3 <<'PY'
+import os, socket, urllib.parse
+url = os.environ.get('APT_CACHE_URL', '').strip()
+if url:
+    parsed = urllib.parse.urlparse(url)
+    host = parsed.hostname
+    port = parsed.port or (80 if parsed.scheme == 'http' else 443)
+    if host:
+        sock = socket.socket()
+        sock.settimeout(1)
+        try:
+            sock.connect((host, port))
+        except OSError:
+            pass
+        else:
+            with open('/etc/apt/apt.conf.d/01proxy', 'w', encoding='utf-8') as fh:
+                fh.write(f"Acquire::http::Proxy \"{url}\";\n")
+                fh.write(f"Acquire::https::Proxy \"{url}\";\n")
+        finally:
+            sock.close()
+PY
+apt-get update
+apt-get install -y --no-install-recommends libpq5
+rm -rf /var/lib/apt/lists/*
+EOF
+
+COPY --from=builder /wheels /wheels
+COPY --from=builder /app/requirements.txt /tmp/requirements.txt
+RUN pip install --upgrade pip \
+    && pip install --no-cache-dir --find-links=/wheels -r /tmp/requirements.txt \
+    && rm -rf /wheels /tmp/requirements.txt
+
+COPY . /app
+RUN chown -R appuser:app /app
+
+USER appuser
+
+EXPOSE 8003
+CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8003", "--workers", "4"]

README.md

@@ -6,24 +6,32 @@ Focuses on ore mining operations and covering parameters such as capital and ope
 The system is designed to help mining companies make informed decisions by simulating various scenarios and analyzing potential outcomes based on stochastic variables.
 
-A range of features are implemented to support these functionalities.
-
-## Features
+## Current Features
+
+> [!TIP]
+> TODO: Update this section to reflect the current feature set.
+
+| Feature | Category | Description | Status |
+| ---------------------- | ----------- | ------------------------------------------------------------------------------------ | ----------- |
+| Scenario Management | Core | Manage multiple mining scenarios with independent parameter sets and outputs. | Done |
+| Parameter Definition | Core | Define and manage various parameters for each scenario. | Done |
+| Cost Tracking | Financial | Capture and analyze capital and operational expenditures. | Done |
+| Consumption Tracking | Operational | Record resource consumption tied to scenarios. | Done |
+| Production Output | Operational | Store and analyze production metrics such as tonnage, recovery, and revenue drivers. | Done |
+| Equipment Management | Operational | Manage equipment inventories and specifications for each scenario. | Done |
+| Maintenance Logging | Operational | Log maintenance events and costs associated with equipment. | Started |
+| Reporting Dashboard | Analytics | View aggregated statistics and visualizations for scenario outputs. | In Progress |
+| Monte Carlo Simulation | Analytics | Run stochastic simulations to assess risk and variability in outcomes. | Started |
+| Application Settings | Core | Manage global application settings such as themes and currency options. | Done |
+
+## Key UI/UX Features
 
-- **Scenario Management**: Manage multiple mining scenarios with independent parameter sets and outputs.
-- **Process Parameters**: Define and persist process inputs via FastAPI endpoints and template-driven forms.
-- **Cost Tracking**: Capture capital (`capex`) and operational (`opex`) expenditures per scenario.
-- **Consumption Tracking**: Record resource consumption (chemicals, fuel, water, scrap) tied to scenarios.
-- **Production Output**: Store production metrics such as tonnage, recovery, and revenue drivers.
-- **Equipment Management**: Register scenario-specific equipment inventories.
-- **Maintenance Logging**: Log maintenance events against equipment with dates and costs.
-- **Reporting Dashboard**: Surface aggregated statistics for simulation outputs with an interactive Chart.js dashboard.
 - **Unified UI Shell**: Server-rendered templates extend a shared base layout with a persistent left sidebar linking scenarios, parameters, costs, consumption, production, equipment, maintenance, simulations, and reporting views.
+- **Operations Overview Dashboard**: The root route (`/`) surfaces cross-scenario KPIs, charts, and maintenance reminders with a one-click refresh backed by aggregated loaders.
+- **Theming Tokens**: Shared CSS variables in `static/css/main.css` centralize the UI color palette for consistent styling and rapid theme tweaks.
+- **Settings Center**: The Settings landing page exposes visual theme controls and links to currency administration, backed by persisted application settings and environment overrides.
 - **Modular Frontend Scripts**: Page-specific interactions in `static/js/` modules, keeping templates lean while enabling browser caching and reuse.
-- **Monte Carlo Simulation (in progress)**: Services and routes are scaffolded for future stochastic analysis.
+
+## Planned Features
+
+See [Roadmap](docs/roadmap.md) for details on planned features and enhancements.
 
 ## Documentation & quickstart
 
@@ -45,47 +53,52 @@ The repository ships with a multi-stage `Dockerfile` that produces a slim runtim
 ### Build container
 
-```powershell
-# Build the image locally
-docker build -t calminer:latest .
+```bash
+docker build -t calminer .
 ```
 
 ### Push to registry
 
-```powershell
-# Tag and push the image to your registry
-docker login your-registry.com -u your-username -p your-password
-docker tag calminer:latest your-registry.com/your-namespace/calminer:latest
-docker push your-registry.com/your-namespace/calminer:latest
+To push the image to a registry, tag it appropriately and push:
+
+```bash
+docker tag calminer your-registry/calminer:latest
+docker push your-registry/calminer:latest
 ```
 
 ### Run container
 
-Expose FastAPI on <http://localhost:8000> with database configuration via granular environment variables:
+To run the container, ensure PostgreSQL is available and set environment variables:
 
-```powershell
-# Provide database configuration via granular environment variables
-docker run --rm -p 8000:8000 ^
-  -e DATABASE_DRIVER="postgresql" ^
-  -e DATABASE_HOST="db.host" ^
-  -e DATABASE_PORT="5432" ^
-  -e DATABASE_USER="calminer" ^
-  -e DATABASE_PASSWORD="s3cret" ^
-  -e DATABASE_NAME="calminer" ^
-  -e DATABASE_SCHEMA="public" ^
-  calminer:latest
+```bash
+docker run -p 8000:8000 \
+  -e DATABASE_HOST=your-postgres-host \
+  -e DATABASE_PORT=5432 \
+  -e DATABASE_USER=calminer \
+  -e DATABASE_PASSWORD=your-password \
+  -e DATABASE_NAME=calminer_db \
+  calminer
 ```
 
-### Orchestrated Deployment
+## Development with Docker Compose
 
-Use `docker compose` or an orchestrator of your choice to co-locate PostgreSQL/Redis alongside the app when needed. The image expects migrations to be applied before startup.
+For local development, use `docker-compose.yml` which includes the app and PostgreSQL services.
 
-## CI/CD expectations
+```bash
+# Start services
+docker-compose up
+
+# Or run in background
+docker-compose up -d
+
+# Stop services
+docker-compose down
+```
+
+The app will be available at `http://localhost:8000`, PostgreSQL at `localhost:5432`.
+
+## CI/CD
 
 CalMiner uses Gitea Actions workflows stored in `.gitea/workflows/`:
 
-- `test.yml` runs style/unit/e2e suites on every push with cached Python dependencies.
-- `build-and-push.yml` builds the Docker image, reuses cached layers, and pushes to the configured registry.
-- `deploy.yml` pulls the pushed image on the target host and restarts the container.
-
-Pipelines assume the following secrets are provisioned in the Gitea instance: `REGISTRY_USERNAME`, `REGISTRY_PASSWORD`, `REGISTRY_URL`, `SSH_HOST`, `SSH_USERNAME`, and `SSH_PRIVATE_KEY`.
+- `ci.yml`: Runs on push and PR to main/develop branches. Sets up Python, installs dependencies, runs tests with coverage, and builds the Docker image.

backups/.gitkeep (new empty file)

database.py

@@ -56,3 +56,11 @@ DATABASE_URL = _build_database_url()
 engine = create_engine(DATABASE_URL, echo=True, future=True)
 SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
 Base = declarative_base()
+
+
+def get_db():
+    db = SessionLocal()
+    try:
+        yield db
+    finally:
+        db.close()
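
A typical consumer of this dependency, sketched minimally below. The route, model, and module names are illustrative assumptions rather than code from the repository; FastAPI's dependency injection runs the generator up to `yield`, hands the session to the handler, and executes the `finally` block once the response is sent.

```python
from fastapi import Depends, FastAPI
from sqlalchemy.orm import Session

from database import get_db  # the generator added in the diff above
from models import Scenario  # hypothetical model module

app = FastAPI()


@app.get("/api/scenarios/count")
def count_scenarios(db: Session = Depends(get_db)) -> dict:
    # The injected session is closed automatically after the request finishes.
    return {"count": db.query(Scenario).count()}
```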


@@ -0,0 +1,35 @@
# Copy this file to config/setup_production.env and replace values with production secrets
# Container image and runtime configuration
CALMINER_IMAGE=registry.example.com/calminer/api:latest
CALMINER_DOMAIN=calminer.example.com
TRAEFIK_ACME_EMAIL=ops@example.com
CALMINER_API_PORT=8000
UVICORN_WORKERS=4
UVICORN_LOG_LEVEL=info
CALMINER_NETWORK=calminer_backend
API_LIMIT_CPUS=1.0
API_LIMIT_MEMORY=1g
API_RESERVATION_MEMORY=512m
TRAEFIK_LIMIT_CPUS=0.5
TRAEFIK_LIMIT_MEMORY=512m
POSTGRES_LIMIT_CPUS=1.0
POSTGRES_LIMIT_MEMORY=2g
POSTGRES_RESERVATION_MEMORY=1g
# Application database connection
DATABASE_DRIVER=postgresql+psycopg2
DATABASE_HOST=production-db.internal
DATABASE_PORT=5432
DATABASE_NAME=calminer
DATABASE_USER=calminer_app
DATABASE_PASSWORD=ChangeMe123!
DATABASE_SCHEMA=public
# Optional consolidated SQLAlchemy URL (overrides granular settings when set)
# DATABASE_URL=postgresql+psycopg2://calminer_app:ChangeMe123!@production-db.internal:5432/calminer
# Superuser credentials used by scripts/setup_database.py for migrations/seed data
DATABASE_SUPERUSER=postgres
DATABASE_SUPERUSER_PASSWORD=ChangeMeSuper123!
DATABASE_SUPERUSER_DB=postgres
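
The granular `DATABASE_*` variables above feed the URL builder referenced earlier (`DATABASE_URL = _build_database_url()` in the application's database module). A minimal sketch of that composition, written as an illustration of the convention rather than the repository's exact code:

```python
import os
from urllib.parse import quote_plus


def build_database_url() -> str:
    """Compose a SQLAlchemy URL; an explicit DATABASE_URL overrides the parts."""
    explicit = os.getenv("DATABASE_URL")
    if explicit:
        return explicit
    driver = os.getenv("DATABASE_DRIVER", "postgresql+psycopg2")
    user = os.getenv("DATABASE_USER", "calminer_app")
    # Quote the password so special characters survive URL parsing.
    password = quote_plus(os.getenv("DATABASE_PASSWORD", ""))
    host = os.getenv("DATABASE_HOST", "localhost")
    port = os.getenv("DATABASE_PORT", "5432")
    name = os.getenv("DATABASE_NAME", "calminer")
    return f"{driver}://{user}:{password}@{host}:{port}/{name}"
```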


@@ -1,23 +0,0 @@
version: "3.9"

services:
  postgres:
    image: postgres:16-alpine
    container_name: calminer_postgres_local
    restart: unless-stopped
    environment:
      POSTGRES_DB: calminer_local
      POSTGRES_USER: calminer
      POSTGRES_PASSWORD: secret
    ports:
      - "5433:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U calminer -d calminer_local"]
      interval: 10s
      timeout: 5s
      retries: 10
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data:

docker-compose.yml (new file)

@@ -0,0 +1,36 @@
version: "3.8"

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8003:8003"
    environment:
      - DATABASE_HOST=postgres
      - DATABASE_PORT=5432
      - DATABASE_USER=calminer
      - DATABASE_PASSWORD=calminer_password
      - DATABASE_NAME=calminer_db
      - DATABASE_DRIVER=postgresql
    depends_on:
      - postgres
    volumes:
      - ./logs:/app/logs
    restart: unless-stopped

  postgres:
    image: postgres:17
    environment:
      - POSTGRES_USER=calminer
      - POSTGRES_PASSWORD=calminer_password
      - POSTGRES_DB=calminer_db
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
    restart: unless-stopped

volumes:
  postgres_data:


@@ -1,62 +0,0 @@
---
title: "01 — Introduction and Goals"
description: "System purpose, stakeholders, and high-level goals; project introduction and business/technical goals."
status: draft
---
# 01 — Introduction and Goals
## Purpose
CalMiner aims to provide a comprehensive platform for mining project scenario analysis, enabling stakeholders to make informed decisions based on data-driven insights.
## Stakeholders
- **Project Managers**: Require tools for scenario planning and risk assessment.
- **Data Analysts**: Need access to historical data and simulation results for analysis.
- **Executives**: Seek high-level insights and reporting for strategic decision-making.
## High-Level Goals
1. **Comprehensive Scenario Analysis**: Enable users to create and analyze multiple project scenarios to assess risks and opportunities.
2. **Data-Driven Decision Making**: Provide stakeholders with the insights needed to make informed decisions based on simulation results.
3. **User-Friendly Interface**: Ensure the platform is accessible and easy to use for all stakeholders, regardless of technical expertise.
## System Overview
CalMiner is a FastAPI application that collects mining project inputs, persists scenario-specific records, and surfaces aggregated insights. The platform targets Monte Carlo-driven planning, with deterministic CRUD features in place and simulation logic staged for future work.
Frontend components are server-rendered Jinja2 templates, with Chart.js powering the dashboard visualization. The backend leverages SQLAlchemy for ORM mapping to a PostgreSQL database.
### Runtime Flow
1. Users navigate to form templates or API clients to manage scenarios, parameters, and operational data.
2. FastAPI routers validate payloads with Pydantic models, then delegate to SQLAlchemy sessions for persistence.
3. Simulation runs (placeholder `services/simulation.py`) will consume stored parameters to emit iteration results via `/api/simulations/run`.
4. Reporting requests POST simulation outputs to `/api/reporting/summary`; the reporting service calculates aggregates (count, min/max, mean, median, percentiles, standard deviation, variance, and tail-risk metrics at the 95% confidence level); a sketch of this aggregation follows the list.
5. `templates/Dashboard.html` fetches summaries, renders metric cards, and plots distribution charts with Chart.js for stakeholder review.
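
Step 4's aggregation can be pictured with a minimal sketch; the function name, nearest-rank percentile method, and lower-tail convention for the 95% risk metrics are assumptions, not the repository's actual implementation.

```python
import statistics


def summarize(values: list[float], confidence: float = 0.95) -> dict:
    """Aggregate simulation outputs into the summary metrics listed above."""
    ordered = sorted(values)  # assumes a non-empty input
    n = len(ordered)

    def pct(p: float) -> float:
        # Nearest-rank percentile; a production version might interpolate.
        return ordered[min(n - 1, max(0, round(p * (n - 1))))]

    var_95 = pct(1 - confidence)  # lower-tail value at risk
    tail = [v for v in ordered if v <= var_95]
    return {
        "count": n,
        "min": ordered[0],
        "max": ordered[-1],
        "mean": statistics.fmean(ordered),
        "median": statistics.median(ordered),
        "p05": pct(0.05),
        "p95": pct(0.95),
        "stdev": statistics.stdev(ordered) if n > 1 else 0.0,
        "variance": statistics.variance(ordered) if n > 1 else 0.0,
        "value_at_risk_95": var_95,
        "expected_shortfall_95": statistics.fmean(tail) if tail else var_95,
    }
```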
### Current implementation status (summary)
- Currency normalization, simulation scaffold, and reporting service exist; see [quickstart](../quickstart.md) for full status and migration instructions.
## MVP Features (migrated)
The following MVP features and priorities were defined during initial planning.
### Prioritized Features
1. **Scenario Creation and Management** (High Priority): Allow users to create, edit, and delete scenarios. Rationale: Core functionality for what-if analysis.
1. **Parameter Input and Validation** (High Priority): Input process parameters with validation. Rationale: Ensures data integrity for simulations.
1. **Monte Carlo Simulation Run** (High Priority): Execute simulations and store results. Rationale: Key differentiator for risk analysis.
1. **Basic Reporting** (Medium Priority): Display NPV, IRR, EBITDA from simulation results. Rationale: Essential for decision-making.
1. **Cost Tracking Dashboard** (Medium Priority): Visualize CAPEX and OPEX. Rationale: Helps monitor expenses.
1. **Consumption Monitoring** (Low Priority): Track resource consumption. Rationale: Useful for optimization.
1. **User Authentication** (Medium Priority): Basic login/logout. Rationale: Security for multi-user access.
1. **Export Results** (Low Priority): Export simulation data to CSV/PDF. Rationale: For external analysis.
### Rationale for Prioritization
- High: Core simulation and scenario features first.
- Medium: Reporting and auth for usability.
- Low: Nice-to-haves after basics.


@@ -1,175 +0,0 @@
---
title: "02 — Architecture Constraints"
description: "Document imposed constraints: technical, organizational, regulatory, and environmental constraints that affect architecture decisions."
status: skeleton
---
# 02 — Architecture Constraints
## Technical Constraints
> e.g., choice of FastAPI, PostgreSQL, SQLAlchemy, Chart.js, Jinja2 templates.
The architecture of CalMiner is influenced by several technical constraints that shape its design and implementation:
1. **Framework Selection**: The choice of FastAPI as the web framework imposes constraints on how the application handles requests, routing, and middleware. FastAPI's asynchronous capabilities must be leveraged appropriately to ensure optimal performance.
2. **Database Technology**: The use of PostgreSQL as the primary database system dictates the data modeling, querying capabilities, and transaction management strategies. SQLAlchemy ORM is used for database interactions, which requires adherence to its conventions and limitations.
3. **Frontend Technologies**: The decision to use Jinja2 for server-side templating and Chart.js for data visualization influences the structure of the frontend code and the way dynamic content is rendered.
4. **Simulation Logic**: The Monte Carlo simulation logic must be designed to efficiently handle large datasets and perform computations within the constraints of the chosen programming language (Python) and its libraries.
## Organizational Constraints
> e.g., team skillsets, development workflows, CI/CD pipelines.
Restrictions arising from organizational factors include:
1. **Team Expertise**: The development team's familiarity with FastAPI, SQLAlchemy, and frontend technologies like Jinja2 and Chart.js influences the architecture choices to ensure maintainability and ease of development.
2. **Development Processes**: The adoption of Agile methodologies and CI/CD pipelines (using Gitea Actions) shapes the architecture to support continuous integration, automated testing, and deployment practices.
3. **Collaboration Tools**: The use of specific collaboration and version control tools (e.g., Gitea) affects how code is managed, reviewed, and integrated, impacting the overall architecture and development workflow.
4. **Documentation Standards**: The requirement for comprehensive documentation (as seen in the `docs/` folder) necessitates an architecture that is well-structured and easy to understand for both current and future team members.
5. **Knowledge Sharing**: The need for effective knowledge sharing and onboarding processes influences the architecture to ensure that it is accessible and understandable for new team members.
6. **Resource Availability**: The availability of hardware, software, and human resources within the organization can impose constraints on the architecture, affecting decisions related to scalability, performance, and feature implementation.
## Regulatory Constraints
> e.g., data privacy laws, industry standards.
Regulatory constraints that impact the architecture of CalMiner include:
1. **Data Privacy Compliance**: The architecture must ensure compliance with data privacy regulations such as GDPR or CCPA, which may dictate how user data is collected, stored, and processed.
2. **Industry Standards**: Adherence to industry-specific standards and best practices may influence the design of data models, security measures, and reporting functionalities.
3. **Auditability**: The system may need to incorporate logging and auditing features to meet regulatory requirements, affecting the architecture of data storage and access controls.
4. **Data Retention Policies**: Regulatory requirements regarding data retention and deletion may impose constraints on how long certain types of data can be stored, influencing database design and data lifecycle management.
5. **Security Standards**: Compliance with security standards (e.g., ISO/IEC 27001) may necessitate the implementation of specific security measures, such as encryption, access controls, and vulnerability management, which impact the overall architecture.
## Environmental Constraints
> e.g., deployment environments, cloud provider limitations.
Environmental constraints affecting the architecture include:
1. **Deployment Environments**: The architecture must accommodate various deployment environments (development, testing, production) with differing configurations and resource allocations.
2. **Cloud Provider Limitations**: If deployed on a specific cloud provider, the architecture may need to align with the provider's services, limitations, and best practices, such as using managed databases or specific container orchestration tools.
3. **Containerization**: The use of Docker for containerization imposes constraints on how the application is packaged, deployed, and scaled, influencing the architecture to ensure compatibility with container orchestration platforms.
4. **Scalability Requirements**: The architecture must be designed to scale efficiently based on anticipated load and usage patterns, considering the limitations of the chosen infrastructure.
## Performance Constraints
> e.g., response time requirements, scalability needs.
Current performance constraints include:
1. **Response Time Requirements**: The architecture must ensure that the system can respond to user requests within a specified time frame, which may impact design decisions related to caching, database queries, and API performance.
2. **Scalability Needs**: The system should be able to handle increased load and user traffic without significant degradation in performance, necessitating a scalable architecture that can grow with demand.
## Security Constraints
> e.g., authentication mechanisms, data encryption standards.
## Budgetary Constraints
> e.g., licensing costs, infrastructure budgets.
## Time Constraints
> e.g., project deadlines, release schedules.
## Interoperability Constraints
> e.g., integration with existing systems, third-party services.
## Maintainability Constraints
> e.g., code modularity, documentation standards.
## Usability Constraints
> e.g., user interface design principles, accessibility requirements.
## Data Constraints
> e.g., data storage formats, data retention policies.
## Deployment Constraints
> e.g., deployment environments, cloud provider limitations.
## Testing Constraints
> e.g., testing frameworks, test coverage requirements.
## Localization Constraints
> e.g., multi-language support, regional settings.
## Versioning Constraints
> e.g., API versioning strategies, backward compatibility.
## Monitoring Constraints
> e.g., logging standards, performance monitoring tools.
## Backup and Recovery Constraints
> e.g., data backup frequency, disaster recovery plans.
## Development Constraints
> e.g., coding languages, frameworks, libraries to be used or avoided.
## Collaboration Constraints
> e.g., communication tools, collaboration platforms.
## Documentation Constraints
> e.g., documentation tools, style guides.
## Training Constraints
> e.g., training programs, skill development initiatives.
## Support Constraints
> e.g., support channels, response time expectations.
## Legal Constraints
> e.g., compliance requirements, intellectual property considerations.
## Ethical Constraints
> e.g., ethical considerations in data usage, user privacy.
## Environmental Impact Constraints
> e.g., energy consumption considerations, sustainability goals.
## Innovation Constraints
> e.g., limitations on adopting new technologies, risk tolerance for experimentation.
## Cultural Constraints
> e.g., organizational culture, team dynamics affecting development practices.
## Stakeholder Constraints
> e.g., stakeholder expectations, communication preferences.
## Change Management Constraints
> e.g., processes for handling changes, version control practices.
## Resource Constraints
> e.g., availability of hardware, software, and human resources.
## Process Constraints
> e.g., development methodologies (Agile, Scrum), project management tools.
## Quality Constraints
> e.g., code quality standards, testing requirements.


@@ -1,57 +0,0 @@
---
title: "03 — Context and Scope"
description: "Describe system context, external actors, and the scope of the architecture."
status: draft
---
# 03 — Context and Scope
## System Context
The CalMiner system operates within the context of mining project management, providing tools for scenario analysis and decision support. It interacts with various data sources, including historical project data and real-time operational metrics.
## External Actors
- **Project Managers**: Utilize the platform for scenario planning and risk assessment.
- **Data Analysts**: Analyze simulation results and derive insights.
- **Executives**: Review high-level reports and dashboards for strategic decision-making.
## Scope of the Architecture
The architecture encompasses the following key areas:
1. **Data Ingestion**: Mechanisms for collecting and processing data from various sources.
2. **Data Storage**: Solutions for storing and managing historical and real-time data.
3. **Simulation Engine**: Core algorithms and models for scenario analysis.
   1. **Modeling Framework**: Tools for defining and managing simulation models.
   2. **Parameter Management**: Systems for handling input parameters and configurations.
   3. **Execution Engine**: Infrastructure for running simulations and processing results.
   4. **Result Storage**: Systems for storing simulation outputs for analysis and reporting.
4. **Financial Reporting**: Tools for generating reports and visualizations based on simulation outcomes.
5. **Risk Assessment**: Frameworks for identifying and evaluating potential project risks.
6. **Profitability Analysis**: Modules for calculating and analyzing project profitability metrics.
7. **User Interface**: Design and implementation of the user-facing components of the system.
8. **Security and Compliance**: Measures to ensure data security and regulatory compliance.
9. **Scalability and Performance**: Strategies for ensuring the system can handle increasing data volumes and user loads.
10. **Integration Points**: Interfaces for integrating with external systems and services.
11. **Monitoring and Logging**: Systems for tracking system performance and user activity.
12. **Maintenance and Support**: Processes for ongoing system maintenance and user support.
## Diagram
```mermaid
sequenceDiagram
participant PM as Project Manager
participant DA as Data Analyst
participant EX as Executive
participant CM as CalMiner System
PM->>CM: Create and manage scenarios
DA->>CM: Analyze simulation results
EX->>CM: Review reports and dashboards
CM->>PM: Provide scenario planning tools
CM->>DA: Deliver analysis insights
CM->>EX: Generate high-level reports
```
This diagram illustrates the key components of the CalMiner system and their interactions with external actors.


@@ -1,49 +0,0 @@
---
title: "04 — Solution Strategy"
description: "High-level solution strategy describing major approaches, technology choices, and trade-offs."
status: draft
---
# 04 — Solution Strategy
This section outlines the high-level solution strategy for implementing the CalMiner system, focusing on major approaches, technology choices, and trade-offs.
## Client-Server Architecture
- **Backend**: FastAPI serves as the backend framework, providing RESTful APIs for data management, simulation execution, and reporting. It leverages SQLAlchemy for ORM-based database interactions with PostgreSQL.
- **Frontend**: Server-rendered Jinja2 templates deliver dynamic HTML views, enhanced with Chart.js for interactive data visualizations. This approach balances performance and simplicity, avoiding the complexity of a full SPA.
- **Middleware**: Custom middleware handles JSON validation to ensure data integrity before processing requests.
## Technology Choices
- **FastAPI**: Chosen for its high performance, ease of use, and modern features like async support and automatic OpenAPI documentation.
- **PostgreSQL**: Selected for its robustness, scalability, and support for complex queries, making it suitable for handling the diverse data needs of mining project management.
- **SQLAlchemy**: Provides a flexible and powerful ORM layer, facilitating database interactions while maintaining code readability and maintainability.
- **Chart.js**: Utilized for its simplicity and effectiveness in rendering interactive charts, enhancing the user experience on the dashboard.
- **Jinja2**: Enables server-side rendering of HTML templates, allowing for dynamic content generation while keeping the frontend lightweight.
- **Pydantic**: Used for data validation and serialization, ensuring that incoming request payloads conform to expected schemas.
- **Docker**: Employed for containerization, ensuring consistent deployment across different environments and simplifying dependency management.
- **Redis**: Used as an in-memory data store to cache frequently accessed data, improving application performance and reducing database load.
## Trade-offs
- **Server-Rendered vs. SPA**: Opted for server-rendered templates over a single-page application (SPA) to reduce complexity and improve initial load times, at the cost of some interactivity.
- **Synchronous vs. Asynchronous**: While FastAPI supports async operations, the initial implementation focuses on synchronous request handling for simplicity, with plans to introduce async features as needed.
- **Monolithic vs. Microservices**: The initial architecture follows a monolithic approach for ease of development and deployment, with the possibility of refactoring into microservices as the system scales.
- **In-Memory Caching**: Implementing Redis for caching introduces additional infrastructure complexity but significantly enhances performance for read-heavy operations.
- **Database Choice**: PostgreSQL was chosen over NoSQL alternatives due to the structured nature of the data and the need for complex querying capabilities, despite potential scalability challenges.
- **Technology Familiarity**: Selected technologies align with the team's existing skill set to minimize the learning curve and accelerate development, even if some alternatives may offer marginally better performance or features.
- **Extensibility vs. Simplicity**: The architecture is designed to be extensible for future features (e.g., Monte Carlo simulation engine) while maintaining simplicity in the initial implementation to ensure timely delivery of core functionalities.
## Future Considerations
- **Scalability**: As the user base grows, consider transitioning to a microservices architecture and implementing load balancing strategies.
- **Asynchronous Processing**: Introduce asynchronous task queues (e.g., Celery) for long-running simulations to improve responsiveness.
- **Enhanced Frontend**: Explore the possibility of integrating a frontend framework (e.g., React or Vue.js) for more dynamic user interactions in future iterations.
- **Advanced Analytics**: Plan for integrating advanced analytics and machine learning capabilities to enhance simulation accuracy and reporting insights.
- **Security Enhancements**: Implement robust authentication and authorization mechanisms to protect sensitive data and ensure compliance with industry standards.
- **Continuous Integration/Continuous Deployment (CI/CD)**: Establish CI/CD pipelines to automate testing, building, and deployment processes for faster and more reliable releases.
- **Monitoring and Logging**: Integrate monitoring tools (e.g., Prometheus, Grafana) and centralized logging solutions (e.g., ELK stack) to track application performance and troubleshoot issues effectively.
- **User Feedback Loop**: Implement mechanisms for collecting user feedback to inform future development priorities and improve user experience.
- **Documentation**: Maintain comprehensive documentation for both developers and end-users to facilitate onboarding and effective use of the system.
- **Testing Strategy**: Develop a robust testing strategy, including unit, integration, and end-to-end tests, to ensure code quality and reliability as the system evolves.


@@ -1,110 +0,0 @@
# Implementation Plan 2025-10-20
This file contains the implementation plan (MVP features, steps, and estimates).
## Project Setup
1. Connect to the PostgreSQL database with schema `calminer`.
1. Create and activate a virtual environment and install dependencies via `requirements.txt`.
1. Define database environment variables in `.env` (e.g., `DATABASE_DRIVER`, `DATABASE_HOST`, `DATABASE_PORT`, `DATABASE_USER`, `DATABASE_PASSWORD`, `DATABASE_NAME`, optional `DATABASE_SCHEMA`).
1. Configure FastAPI entrypoint in `main.py` to include routers.
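As an illustration of the entrypoint wiring, a minimal sketch follows; the router module and attribute names (`scenarios.router`, `parameters.router`) are assumptions based on the layout above, not confirmed code:

```python
# main.py — hedged sketch of the FastAPI entrypoint wiring.
from fastapi import FastAPI

from routes import parameters, scenarios  # assumed module names under routes/

app = FastAPI(title="CalMiner")

# Each routes module is assumed to expose an APIRouter instance named `router`.
app.include_router(scenarios.router, prefix="/api/scenarios", tags=["scenarios"])
app.include_router(parameters.router, prefix="/api/parameters", tags=["parameters"])
```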
## Feature: Scenario Management
### Scenario Management — Steps
1. Create `models/scenario.py` for scenario CRUD.
1. Implement API endpoints in `routes/scenarios.py` (GET, POST, PUT, DELETE).
1. Write unit tests in `tests/unit/test_scenario.py`.
1. Build UI component `components/ScenarioForm.html`.
## Feature: Process Parameters
### Parameters — Steps
1. Create `models/parameters.py` for process parameters.
1. Implement Pydantic schemas in `routes/parameters.py`.
1. Add validation middleware in `middleware/validation.py`.
1. Write unit tests in `tests/unit/test_parameter.py`.
1. Build UI component `components/ParameterInput.html`.
## Feature: Stochastic Variables
### Stochastic Variables — Steps
1. Create `models/distribution.py` for variable distributions.
1. Implement API routes in `routes/distributions.py`.
1. Write Pydantic schemas and validations.
1. Write unit tests in `tests/unit/test_distribution.py`.
1. Build UI component `components/DistributionEditor.html`.
## Feature: Cost Tracking
### Cost Tracking — Steps
1. Create `models/capex.py` and `models/opex.py`.
1. Implement API routes in `routes/costs.py`.
1. Write Pydantic schemas for CAPEX/OPEX.
1. Write unit tests in `tests/unit/test_costs.py`.
1. Build UI component `components/CostForm.html`.
## Feature: Consumption Tracking
### Consumption Tracking — Steps
1. Create models for consumption: `chemical_consumption.py`, `fuel_consumption.py`, `water_consumption.py`, `scrap_consumption.py`.
1. Implement API routes in `routes/consumption.py`.
1. Write Pydantic schemas for consumption data.
1. Write unit tests in `tests/unit/test_consumption.py`.
1. Build UI component `components/ConsumptionDashboard.html`.
## Feature: Production Output
### Production Output — Steps
1. Create `models/production_output.py`.
1. Implement API routes in `routes/production.py`.
1. Write Pydantic schemas for production output.
1. Write unit tests in `tests/unit/test_production.py`.
1. Build UI component `components/ProductionChart.html`.
## Feature: Equipment Management
### Equipment Management — Steps
1. Create `models/equipment.py` for equipment data.
1. Implement API routes in `routes/equipment.py`.
1. Write Pydantic schemas for equipment.
1. Write unit tests in `tests/unit/test_equipment.py`.
1. Build UI component `components/EquipmentList.html`.
## Feature: Maintenance Logging
### Maintenance Logging — Steps
1. Create `models/maintenance.py` for maintenance events.
1. Implement API routes in `routes/maintenance.py`.
1. Write Pydantic schemas for maintenance logs.
1. Write unit tests in `tests/unit/test_maintenance.py`.
1. Build UI component `components/MaintenanceLog.html`.
## Feature: Monte Carlo Simulation Engine
### Monte Carlo Engine — Steps
1. Implement Monte Carlo logic in `services/simulation.py`.
1. Persist results in `models/simulation_result.py`.
1. Expose endpoint in `routes/simulations.py`.
1. Write integration tests in `tests/unit/test_simulation.py`.
1. Build UI component `components/SimulationRunner.html`.
## Feature: Reporting / Dashboard
### Reporting / Dashboard — Steps
1. Implement report calculations in `services/reporting.py`.
1. Add detailed and summary endpoints in `routes/reporting.py`.
1. Write unit tests in `tests/unit/test_reporting.py`.
1. Enhance UI in `components/Dashboard.html` with charts.
See [UI and Style](../13_ui_and_style.md) for the UI template audit, layout guidance, and next steps.


@@ -1,62 +0,0 @@
---
title: "05 — Building Block View"
description: "Explain the static structure: modules, components, services and their relationships."
status: draft
---
<!-- markdownlint-disable-next-line MD025 -->
# 05 — Building Block View
## Architecture overview
This overview complements [architecture](README.md) with a high-level map of CalMiner's module layout and request flow.
Refer to the detailed architecture chapters in `docs/architecture/`:
- Module map & components: [Building Block View](05_building_block_view.md)
- Request flow & runtime interactions: [Runtime View](06_runtime_view.md)
- Simulation roadmap & strategy: [Solution Strategy](04_solution_strategy.md)
## System Components
### Backend
- **FastAPI application** (`main.py`): entry point that configures routers, middleware, and startup/shutdown events.
- **Routers** (`routes/`): modular route handlers for scenarios, parameters, costs, consumption, production, equipment, maintenance, simulations, and reporting. Each router defines RESTful endpoints, request/response schemas, and orchestrates service calls.
  - Routers obtain database sessions through a shared dependency module (`routes/dependencies.get_db`) for SQLAlchemy session management.
- **Models** (`models/`): SQLAlchemy ORM models representing database tables and relationships, encapsulating domain entities like Scenario, CapEx, OpEx, Consumption, ProductionOutput, Equipment, Maintenance, and SimulationResult.
- **Services** (`services/`): business logic layer that processes data, performs calculations, and interacts with models. Key services include reporting calculations and Monte Carlo simulation scaffolding.
- `services/settings.py`: manages application settings backed by the `application_setting` table, including CSS variable defaults, persistence, and environment-driven overrides that surface in both the API and UI.
- **Database** (`config/database.py`): sets up the SQLAlchemy engine and session management for PostgreSQL interactions.
### Frontend
- **Templates** (`templates/`): Jinja2 templates for server-rendered HTML views, extending a shared base layout with a persistent sidebar for navigation.
- **Static Assets** (`static/`): CSS and JavaScript files for styling and interactivity. Shared CSS variables in `static/css/main.css` define the color palette, while page-specific JS modules in `static/js/` handle dynamic behaviors.
- **Reusable partials** (`templates/partials/components.html`): macro library that standardises select inputs, feedback/empty states, and table wrappers so pages remain consistent while keeping DOM hooks stable for existing JavaScript modules.
- `templates/settings.html`: Settings hub that renders theme controls and environment override tables using metadata provided by `routes/ui.py`.
- `static/js/settings.js`: applies client-side validation, form submission, and live CSS updates for theme changes, respecting environment-managed variables returned by the API.
### Middleware & Utilities
- **Middleware** (`middleware/validation.py`): applies JSON validation before requests reach routers.
- **Testing** (`tests/unit/`): pytest suite covering route and service behavior, including UI rendering checks and negative-path router validation tests to ensure consistent HTTP error semantics. Playwright end-to-end coverage is planned for core smoke flows (dashboard load, scenario inputs, reporting) and will attach in CI once scaffolding is completed.
## Module Map (code)
- `scenario.py`: central scenario entity with relationships to cost, consumption, production, equipment, maintenance, and simulation results.
- `capex.py`, `opex.py`: financial expenditures tied to scenarios.
- `consumption.py`, `production_output.py`: operational data tables.
- `equipment.py`, `maintenance.py`: asset management models.
- `simulation_result.py`: stores Monte Carlo iteration outputs.
- `application_setting.py`: persists editable application configuration, currently focused on theme variables but designed to store future settings categories.
## Service Layer
- `reporting.py`: computes aggregates (count, min/max, mean, median, percentiles, standard deviation, variance, tail-risk metrics) from simulation results.
- `simulation.py`: scaffolds Monte Carlo simulation logic (currently in-memory; persistence planned).
- `currency.py`: handles currency normalization for cost tables.
- `utils.py`: shared helper functions (e.g., statistical calculations).
- `validation.py`: JSON schema validation middleware (see the sketch after this list).
- `database.py`: SQLAlchemy engine and session setup.
- `dependencies.py`: FastAPI dependency injection for DB sessions.
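For orientation, here is a hedged sketch of what a JSON validation middleware can look like in a Starlette/FastAPI stack; the actual `middleware/validation.py` may be structured differently:

```python
# Hedged sketch of a JSON validation middleware; not the repo's actual code.
import json

from starlette.middleware.base import BaseHTTPMiddleware
from starlette.requests import Request
from starlette.responses import JSONResponse


class JSONValidationMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request: Request, call_next):
        # Only inspect methods expected to carry JSON bodies.
        if request.method in {"POST", "PUT", "PATCH"}:
            body = await request.body()
            if body:
                try:
                    json.loads(body)
                except json.JSONDecodeError:
                    return JSONResponse({"detail": "Invalid JSON payload"}, status_code=400)
        return await call_next(request)
```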


@@ -1,288 +0,0 @@
---
title: "06 — Runtime View"
description: "Describe runtime aspects: request flows, lifecycle of key interactions, and runtime components."
status: draft
---
# 06 — Runtime View
## Overview
The runtime view focuses on the dynamic behavior of the CalMiner application during execution. It illustrates how various components interact to fulfill user requests, process data, and generate outputs. Key runtime scenarios include scenario management, parameter input handling, cost tracking, consumption tracking, production output recording, equipment management, maintenance logging, Monte Carlo simulations, and reporting.
## Request Flow
1. **User Interaction**: A user interacts with the web application through the UI, triggering actions such as creating a scenario, inputting parameters, or generating reports.
2. **API Request**: The frontend sends HTTP requests (GET, POST, PUT, DELETE) to the appropriate API endpoints defined in the `routes/` directory.
3. **Routing**: The FastAPI framework routes the incoming requests to the corresponding route handlers.
4. **Service Layer**: Route handlers invoke services from the `services/` directory to process the business logic.
5. **Database Interaction**: Services interact with the database via ORM models defined in the `models/` directory to perform CRUD operations.
6. **Response Generation**: After processing, services return data to the route handlers, which format the response (JSON or HTML) and send it back to the frontend.
7. **UI Update**: The frontend updates the UI based on the response, rendering new data or updating existing views.
8. **Reporting Pipeline**: For reporting, data is aggregated from various sources, processed to generate statistics, and presented in the dashboard using Chart.js.
9. **Monte Carlo Simulations**: Stochastic simulations are executed in the backend, generating probabilistic outcomes that are stored temporarily and used for risk analysis in reports.
10. **Error Handling**: Throughout the process, error handling mechanisms ensure that exceptions are caught and appropriate responses are sent back to the user.
Request flow diagram:
```mermaid
sequenceDiagram
participant User
participant Frontend
participant API
participant Service
participant Database
User->>Frontend: Interact with UI
Frontend->>API: Send HTTP Request
API->>Service: Route to Handler
Service->>Database: Perform CRUD Operation
Database-->>Service: Return Data
Service-->>API: Return Processed Data
API-->>Frontend: Send Response
Frontend-->>User: Update UI
participant Reporting
Service->>Reporting: Aggregate Data
Reporting-->>Service: Return Report Data
Service-->>API: Return Report Response
API-->>Frontend: Send Report Data
Frontend-->>User: Render Report
participant Simulation
Service->>Simulation: Execute Monte Carlo Simulation
Simulation-->>Service: Return Simulation Results
Service-->>API: Return Simulation Data
API-->>Frontend: Send Simulation Data
Frontend-->>User: Display Simulation Results
```
## Key Runtime Scenarios
### Scenario Management
1. User accesses the scenario list via the UI.
2. The frontend sends a GET request to `/api/scenarios`.
3. The `ScenarioService` retrieves scenarios from the database.
4. The response is rendered in the UI.
5. For scenario creation, the user submits a form, triggering a POST request to `/api/scenarios`, which the `ScenarioService` processes to create a new scenario in the database.
6. The UI updates to reflect the new scenario.
Scenario management diagram:
```mermaid
sequenceDiagram
participant User
participant Frontend
participant API
participant ScenarioService
participant Database
User->>Frontend: Access Scenario List
Frontend->>API: GET /api/scenarios
API->>ScenarioService: Route to Handler
ScenarioService->>Database: Retrieve Scenarios
Database-->>ScenarioService: Return Scenarios
ScenarioService-->>API: Return Scenario Data
API-->>Frontend: Send Response
Frontend-->>User: Render Scenario List
User->>Frontend: Submit New Scenario Form
Frontend->>API: POST /api/scenarios
API->>ScenarioService: Route to Handler
ScenarioService->>Database: Create New Scenario
Database-->>ScenarioService: Confirm Creation
ScenarioService-->>API: Return New Scenario Data
API-->>Frontend: Send Response
Frontend-->>User: Update UI with New Scenario
```
### Process Parameter Input
1. User navigates to the parameter input form.
2. The frontend fetches existing parameters via a GET request to `/api/parameters`.
3. The `ParameterService` retrieves parameters from the database.
4. The response is rendered in the UI.
5. For parameter updates, the user submits a form, triggering a PUT request to `/api/parameters/:id`, which the `ParameterService` processes to update the parameter in the database.
6. The UI updates to reflect the changes.
Parameter input diagram:
```mermaid
sequenceDiagram
participant User
participant Frontend
participant API
participant ParameterService
participant Database
User->>Frontend: Navigate to Parameter Input Form
Frontend->>API: GET /api/parameters
API->>ParameterService: Route to Handler
ParameterService->>Database: Retrieve Parameters
Database-->>ParameterService: Return Parameters
ParameterService-->>API: Return Parameter Data
API-->>Frontend: Send Response
Frontend-->>User: Render Parameter Form
User->>Frontend: Submit Parameter Update Form
Frontend->>API: PUT /api/parameters/:id
API->>ParameterService: Route to Handler
ParameterService->>Database: Update Parameter
Database-->>ParameterService: Confirm Update
ParameterService-->>API: Return Updated Parameter Data
API-->>Frontend: Send Response
Frontend-->>User: Update UI with Updated Parameter
```
### Cost Tracking
1. User accesses the cost tracking view.
2. The frontend sends a GET request to `/api/costs` to fetch existing cost records.
3. The `CostService` retrieves cost data from the database.
4. The response is rendered in the UI.
5. For cost updates, the user submits a form, triggering a PUT request to `/api/costs/:id`, which the `CostService` processes to update the cost record in the database.
6. The UI updates to reflect the changes.
Cost tracking diagram:
```mermaid
sequenceDiagram
participant User
participant Frontend
participant API
participant CostService
participant Database
User->>Frontend: Access Cost Tracking View
Frontend->>API: GET /api/costs
API->>CostService: Route to Handler
CostService->>Database: Retrieve Cost Records
Database-->>CostService: Return Cost Data
CostService-->>API: Return Cost Data
API-->>Frontend: Send Response
Frontend-->>User: Render Cost Tracking View
User->>Frontend: Submit Cost Update Form
Frontend->>API: PUT /api/costs/:id
API->>CostService: Route to Handler
CostService->>Database: Update Cost Record
Database-->>CostService: Confirm Update
CostService-->>API: Return Updated Cost Data
API-->>Frontend: Send Response
Frontend-->>User: Update UI with Updated Cost Data
```
## Reporting Pipeline and UI Integration
1. **Data Sources**
- Scenario-linked calculations (costs, consumption, production) produce raw figures stored in dedicated tables (`capex`, `opex`, `consumption`, `production_output`).
- Monte Carlo simulations (currently transient) generate arrays of `{ "result": float }` objects that the dashboard or downstream tooling passes directly to reporting endpoints.
2. **API Contract**
- `POST /api/reporting/summary` accepts a JSON array of result objects and validates shape through `_validate_payload` in `routes/reporting.py`.
- On success it returns a structured payload (`ReportSummary`) containing count, mean, median, min/max, standard deviation, and percentile values, all as floats.
3. **Service Layer**
- `services/reporting.generate_report` converts the sanitized payload into descriptive statistics using Python's standard library (`statistics` module) to avoid external dependencies; a sketch follows the diagram below.
- The service remains stateless; no database read/write occurs, which keeps summary calculations deterministic and idempotent.
- Extended KPIs (surfaced in the API and dashboard):
- `variance`: population variance computed as the square of the population standard deviation.
- `percentile_5` and `percentile_95`: lower and upper tail interpolated percentiles for sensitivity bounds.
- `value_at_risk_95`: 5th percentile threshold representing the minimum outcome within a 95% confidence band.
- `expected_shortfall_95`: mean of all outcomes at or below the `value_at_risk_95`, highlighting tail exposure.
4. **UI Consumption**
- `templates/Dashboard.html` posts the user-provided dataset to the summary endpoint, renders metric cards for each field, and charts the distribution using Chart.js.
- `SUMMARY_FIELDS` now includes variance, 5th/10th/90th/95th percentiles, and tail-risk metrics (VaR/Expected Shortfall at 95%); tooltip annotations surface the tail metrics alongside the percentile line chart.
- Error handling surfaces HTTP failures inline so users can address malformed JSON or backend availability issues without leaving the page.
Reporting pipeline diagram:
```mermaid
sequenceDiagram
participant User
participant Frontend
participant API
participant ReportingService
User->>Frontend: Input Data for Reporting
Frontend->>API: POST /api/reporting/summary
API->>ReportingService: Route to Handler
ReportingService->>ReportingService: Validate Payload
ReportingService->>ReportingService: Compute Statistics
ReportingService-->>API: Return Report Summary
API-->>Frontend: Send Report Summary
Frontend-->>User: Render Report Metrics and Charts
```
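To make the extended KPIs concrete, here is a minimal standard-library sketch of the summary calculation; the field names mirror the list above, but the actual `services/reporting.generate_report` implementation may differ:

```python
# Hedged sketch of the summary statistics; assumes a non-empty, validated payload.
import statistics


def summarize(results: list[float]) -> dict:
    ordered = sorted(results)
    # statistics.quantiles(n=100) returns the 99 interpolated percentile cut points.
    pct = statistics.quantiles(ordered, n=100)
    var_95 = pct[4]  # 5th percentile = Value at Risk at 95% confidence
    tail = [x for x in ordered if x <= var_95]
    stddev = statistics.pstdev(ordered)
    return {
        "count": len(ordered),
        "mean": statistics.fmean(ordered),
        "median": statistics.median(ordered),
        "min": ordered[0],
        "max": ordered[-1],
        "std_dev": stddev,
        "variance": stddev ** 2,  # population variance = pstdev squared
        "percentile_5": pct[4],
        "percentile_95": pct[94],
        "value_at_risk_95": var_95,
        "expected_shortfall_95": statistics.fmean(tail) if tail else var_95,
    }
```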
## Monte Carlo Simulation Execution
1. User initiates a Monte Carlo simulation via the UI.
2. The frontend sends a POST request to `/api/simulations/run` with simulation parameters.
3. The `SimulationService` executes the Monte Carlo logic, generating stochastic results (a sketch follows the diagram below).
4. The results are temporarily stored and returned to the frontend.
5. The UI displays the simulation results and allows users to trigger reporting based on these outcomes.
6. The reporting pipeline processes the simulation results as described above.
7. Error handling ensures that any issues during simulation execution are communicated back to the user.
Monte Carlo simulation diagram:
```mermaid
sequenceDiagram
participant User
participant Frontend
participant API
participant SimulationService
User->>Frontend: Input Simulation Parameters
Frontend->>API: POST /api/simulations/run
API->>SimulationService: Route to Handler
SimulationService->>SimulationService: Execute Monte Carlo Logic
SimulationService-->>API: Return Simulation Results
API-->>Frontend: Send Simulation Results
Frontend-->>User: Render Simulation Results
```
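Since the simulation itself is only described at a high level here, the following is a hedged sketch of the Monte Carlo loop, assuming normally distributed parameters and the `{ "result": float }` output shape consumed by the reporting endpoint; `services/simulation.py` may differ:

```python
# Hedged Monte Carlo sketch; distribution handling in services/simulation.py may differ.
import random


def run_simulation(base_value: float, stddev: float, iterations: int = 1000) -> list[dict]:
    results = []
    for _ in range(iterations):
        # One stochastic outcome per iteration (normal distribution assumed).
        results.append({"result": random.gauss(base_value, stddev)})
    return results
```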
## Error Handling
Throughout the runtime processes, error handling mechanisms are implemented to catch exceptions and provide meaningful feedback to users. Common error scenarios include:
- Invalid input data
- Database connection issues
- Simulation execution errors
- Reporting calculation failures
- API endpoint unavailability
- Timeouts during long-running operations
- Unauthorized access attempts
- Data validation failures
- Resource not found errors
Error handling diagram:
```mermaid
sequenceDiagram
participant User
participant Frontend
participant API
participant Service
User->>Frontend: Perform Action
Frontend->>API: Send Request
API->>Service: Route to Handler
Service->>Service: Process Request
alt Success
Service-->>API: Return Data
API-->>Frontend: Send Response
Frontend-->>User: Update UI
else Error
Service-->>API: Return Error
API-->>Frontend: Send Error Response
Frontend-->>User: Display Error Message
end
```


@@ -1,103 +0,0 @@
---
title: "07 — Deployment View"
description: "Describe deployment topology, infrastructure components, and environments (dev/stage/prod)."
status: draft
---
<!-- markdownlint-disable-next-line MD025 -->
# 07 — Deployment View
## Deployment Topology
The CalMiner application is deployed using a multi-tier architecture consisting of the following layers:
1. **Client Layer**: This layer consists of web browsers that interact with the application through a user interface rendered by Jinja2 templates and enhanced with JavaScript (Chart.js for dashboards).
2. **Web Application Layer**: This layer hosts the FastAPI application, which handles API requests, business logic, and serves HTML templates. It communicates with the database layer for data persistence.
3. **Database Layer**: This layer consists of a PostgreSQL database that stores all application data, including scenarios, parameters, costs, consumption, production outputs, equipment, maintenance logs, and simulation results.
```mermaid
graph TD
A["Client Layer<br/>(Web Browsers)"] --> B["Web Application Layer<br/>(FastAPI)"]
B --> C["Database Layer<br/>(PostgreSQL)"]
```
## Infrastructure Components
The infrastructure components for the application include:
- **Web Server**: Hosts the FastAPI application and serves API endpoints.
- **Database Server**: PostgreSQL database for persisting application data.
- **Static File Server**: Serves static assets such as CSS, JavaScript, and image files.
- **Reverse Proxy (optional)**: An Nginx or Apache server can be used as a reverse proxy.
- **Containerization**: Docker images are generated via the repository `Dockerfile`, using a multi-stage build to keep the final runtime minimal.
- **CI/CD Pipeline**: Automated pipelines (Gitea Actions) run tests, build/push Docker images, and trigger deployments.
- **Cloud Infrastructure (optional)**: The application can be deployed on cloud platforms.
```mermaid
graph TD
A[Web Server] --> B[Database Server]
A --> C[Static File Server]
A --> D[Reverse Proxy]
A --> E[Containerization]
A --> F[CI/CD Pipeline]
A --> G[Cloud Infrastructure]
```
## Environments
The application can be deployed in multiple environments to support development, testing, and production:
### Development Environment
The development environment is set up for local development and testing. It includes:
- Local PostgreSQL instance (Docker Compose recommended; a compose file is available at `docker-compose.postgres.yml`)
- FastAPI server running in debug mode
### Testing Environment
The testing environment is set up for automated testing and quality assurance. It includes:
- Staging PostgreSQL instance
- FastAPI server running in testing mode
- Automated test suite (e.g., pytest) for running unit and integration tests
### Production Environment
The production environment is set up for serving live traffic and includes:
- Production PostgreSQL instance
- FastAPI server running in production mode
- Load balancer (e.g., Nginx) for distributing incoming requests
- Monitoring and logging tools for tracking application performance
## Containerized Deployment Flow
The Docker-based deployment path aligns with the solution strategy documented in [04 — Solution Strategy](04_solution_strategy.md) and the CI practices captured in [14 — Testing & CI](14_testing_ci.md).
### Image Build
- The multi-stage `Dockerfile` installs dependencies in a builder layer (including system compilers and Python packages) and copies only the required runtime artifacts to the final image.
- Build arguments are minimal; database configuration is supplied at runtime via granular variables (`DATABASE_DRIVER`, `DATABASE_HOST`, `DATABASE_PORT`, `DATABASE_USER`, `DATABASE_PASSWORD`, `DATABASE_NAME`, optional `DATABASE_SCHEMA`). Secrets and configuration should be passed via environment variables or an orchestrator.
- The resulting image exposes port `8000` and starts `uvicorn main:app` (see [README.md](../../README.md)).
### Runtime Environment
- For single-node deployments, run the container alongside PostgreSQL/Redis using Docker Compose or an equivalent orchestrator.
- A reverse proxy (e.g., Nginx) terminates TLS and forwards traffic to the container on port `8000`.
- Migrations must be applied prior to rolling out a new image; automation can hook into the deploy step to run `scripts/run_migrations.py`.
### CI/CD Integration
- Gitea Actions workflows reside under `.gitea/workflows/`.
- `test.yml` executes the pytest suite using cached pip dependencies.
- `build-and-push.yml` logs into the container registry, rebuilds the Docker image using GitHub Actions cache-backed layers, and pushes `latest` (and additional tags as required).
- `deploy.yml` connects to the target host via SSH, pulls the pushed tag, stops any existing container, and launches the new version.
- Required secrets: `REGISTRY_URL`, `REGISTRY_USERNAME`, `REGISTRY_PASSWORD`, `SSH_HOST`, `SSH_USERNAME`, `SSH_PRIVATE_KEY`.
- Extend these workflows when introducing staging/blue-green deployments; keep cross-links with [14 — Testing & CI](14_testing_ci.md) up to date.
## Integrations and Future Work (deployment-related)
- **Persistence of results**: `/api/simulations/run` currently returns in-memory results; next iteration should persist to `simulation_result` and reference scenarios.
- **Deployment**: implement infrastructure-as-code (e.g., Terraform/Ansible) to provision the hosting environment and maintain parity across dev/stage/prod.


@@ -1,64 +0,0 @@
---
title: "08 — Concepts"
description: "Document key concepts, domain models, and terminology used throughout the architecture documentation."
status: draft
---
# 08 — Concepts
## Key Concepts
### Scenario
A `scenario` represents a distinct mining project configuration, encapsulating all relevant parameters, costs, consumption, production outputs, equipment, maintenance logs, and simulation results. Each scenario is independent, allowing users to model and analyze different mining strategies.
### Parameterization
Parameters are defined for each scenario to capture inputs such as resource consumption rates, production targets, cost factors, and equipment specifications. Parameters can have fixed values or be linked to probability distributions for stochastic simulations.
### Monte Carlo Simulation
The Monte Carlo simulation engine allows users to perform risk analysis by running multiple iterations of a scenario with varying input parameters based on defined probability distributions. This helps in understanding the range of possible outcomes and their associated probabilities.
## Domain Model
The domain model consists of the following key entities:
- `Scenario`: Represents a mining project configuration.
- `Parameter`: Input values for scenarios, which can be fixed or probabilistic.
- `Cost`: Tracks capital and operational expenditures.
- `Consumption`: Records resource usage.
- `ProductionOutput`: Captures production metrics.
- `Equipment`: Represents mining equipment associated with a scenario.
- `Maintenance`: Logs maintenance events for equipment.
- `SimulationResult`: Stores results from Monte Carlo simulations.
- `Distribution`: Defines probability distributions for stochastic parameters.
- `User`: Represents application users and their roles.
- `Report`: Generated reports summarizing scenario analyses.
- `Dashboard`: Visual representation of key performance indicators and metrics.
- `AuditLog`: Tracks changes and actions performed within the application.
- `Notification`: Alerts and messages related to scenario events and updates.
- `Tag`: Labels for categorizing scenarios and other entities.
- `Attachment`: Files associated with scenarios, such as documents or images.
- `Version`: Tracks different versions of scenarios and their configurations.
### Detailed Domain Models
See [Domain Models](08_concepts/08_01_domain_models.md) document for detailed class diagrams and entity relationships.
## Data Model Highlights
- `scenario`: central entity describing a mining scenario; owns relationships to cost, consumption, production, equipment, and maintenance tables.
- `capex`, `opex`: monetary tracking linked to scenarios.
- `consumption`: resource usage entries parameterized by scenario and description.
- `parameter`: scenario inputs with base `value` and optional distribution linkage via `distribution_id`, `distribution_type`, and JSON `distribution_parameters` to support simulation sampling.
- `production_output`: production metrics per scenario.
- `equipment` and `maintenance`: equipment inventory and maintenance events with dates/costs.
- `simulation_result`: staging table for future Monte Carlo outputs (not yet populated by `run_simulation`).
- `application_setting`: centralized key/value store for UI and system configuration, supporting typed values, categories, and editability flags so administrators can manage theme variables and future global options without code changes.
Foreign keys enforce referential integrity between domain tables and their scenarios, enabling per-scenario analytics.
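As a hedged illustration of these relationships in SQLAlchemy (column names follow the highlights above; the real declarations in `models/` may differ):

```python
# Hedged SQLAlchemy sketch of the scenario/capex relationship; not the repo's actual models.
from sqlalchemy import Column, Float, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()


class Scenario(Base):
    __tablename__ = "scenario"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    capex_items = relationship("CapEx", back_populates="scenario")


class CapEx(Base):
    __tablename__ = "capex"
    id = Column(Integer, primary_key=True)
    # Foreign key back to the owning scenario enforces referential integrity.
    scenario_id = Column(Integer, ForeignKey("scenario.id"), nullable=False)
    amount = Column(Float)
    scenario = relationship("Scenario", back_populates="capex_items")
```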
### Detailed Data Models
See [Data Models](08_concepts/08_02_data_models.md) document for detailed ER diagrams and table descriptions.


@@ -1,106 +0,0 @@
# Data Models
## Data Model Highlights
- `scenario`: central entity describing a mining scenario; owns relationships to cost, consumption, production, equipment, and maintenance tables.
- `capex`, `opex`: monetary tracking linked to scenarios.
- `consumption`: resource usage entries parameterized by scenario and description.
- `parameter`: scenario inputs with base `value` and optional distribution linkage via `distribution_id`, `distribution_type`, and JSON `distribution_parameters` to support simulation sampling.
- `production_output`: production metrics per scenario.
- `equipment` and `maintenance`: equipment inventory and maintenance events with dates/costs.
- `simulation_result`: staging table for future Monte Carlo outputs (not yet populated by `run_simulation`).
Foreign keys enforce referential integrity between domain tables and their scenarios, enabling per-scenario analytics.
## Schema Diagrams
```mermaid
erDiagram
SCENARIO ||--o{ CAPEX : has
SCENARIO ||--o{ OPEX : has
SCENARIO ||--o{ CONSUMPTION : has
SCENARIO ||--o{ PARAMETER : has
SCENARIO ||--o{ PRODUCTION_OUTPUT : has
SCENARIO ||--o{ EQUIPMENT : has
EQUIPMENT ||--o{ MAINTENANCE : has
SCENARIO ||--o{ SIMULATION_RESULT : has
SCENARIO {
int id PK
string name
string description
datetime created_at
datetime updated_at
}
CAPEX {
int id PK
int scenario_id FK
float amount
string description
datetime created_at
datetime updated_at
}
OPEX {
int id PK
int scenario_id FK
float amount
string description
datetime created_at
datetime updated_at
}
CONSUMPTION {
int id PK
int scenario_id FK
string resource_type
float quantity
string description
datetime created_at
datetime updated_at
}
PRODUCTION_OUTPUT {
int id PK
int scenario_id FK
float tonnage
float recovery_rate
float revenue
datetime created_at
datetime updated_at
}
EQUIPMENT {
int id PK
int scenario_id FK
string name
string type
datetime created_at
datetime updated_at
}
MAINTENANCE {
int id PK
int equipment_id FK
date maintenance_date
float cost
string description
datetime created_at
datetime updated_at
}
SIMULATION_RESULT {
int id PK
int scenario_id FK
json result_data
datetime created_at
datetime updated_at
}
PARAMETER {
int id PK
int scenario_id FK
string name
float value
int distribution_id FK
string distribution_type
json distribution_parameters
datetime created_at
datetime updated_at
}
```


@@ -1,5 +0,0 @@
# 09 — Architecture Decisions
Status: skeleton
Record important architectural decisions, their rationale, and alternatives considered.


@@ -1,5 +0,0 @@
# 10 — Quality Requirements
Status: skeleton
List non-functional requirements (performance, scalability, reliability, security) and measurable acceptance criteria.


@@ -1,5 +0,0 @@
# 11 — Technical Risks
Status: skeleton
Document potential technical risks, mitigation strategies, and monitoring suggestions.


@@ -1,5 +0,0 @@
# 12 — Glossary
Status: skeleton
Project glossary and definitions for domain-specific terms.


@@ -1,85 +0,0 @@
# 13 — UI, templates and styling
Status: migrated
This chapter collects UI integration notes, reusable template components, styling audit points and per-page UI data/actions.
## Reusable Template Components
To reduce duplication across form-centric pages, shared Jinja macros live in `templates/partials/components.html`.
- `select_field(...)`: renders labeled `<select>` controls with consistent placeholder handling and optional preselection. Existing JavaScript modules continue to target the generated IDs, so template calls must pass the same identifiers (`consumption-form-scenario`, etc.).
- `feedback(...)` and `empty_state(...)`: wrap status messages in standard classes (`feedback`, `empty-state`) with optional `hidden` toggles so scripts can control visibility without reimplementing markup.
- `table_container(...)`: provides a semantic wrapper and optional heading around tabular content; the `{% call %}` body supplies the `<thead>`, `<tbody>`, and `<tfoot>` elements while the macro applies the `table-container` class and manages hidden state.
Pages like `templates/consumption.html` and `templates/costs.html` already consume these helpers to keep markup aligned while preserving existing JavaScript selectors.
Import macros via:
```jinja
{% from "partials/components.html" import select_field, feedback, table_container with context %}
```
## Styling Audit Notes (2025-10-21)
- **Spacing**: Panels (`section.panel`) sometimes lack consistent vertical rhythm between headings, form grids, and tables. Extra top/bottom margin utilities would help align content.
- **Typography**: Headings rely on browser defaults; font-size scale is uneven between `<h2>` and `<h3>`. Define explicit scale tokens (e.g., `--font-size-lg`) for predictable sizing.
- **Forms**: `.form-grid` uses fixed column gaps that collapse on small screens; introduce responsive grid rules to stack gracefully below ~768px.
- **Tables**: `.table-container` wrappers need overflow handling for narrow viewports; consider `overflow-x: auto` with padding adjustments.
- **Feedback/Empty states**: Messages use default font weight and spacing; a utility class for margin/padding would ensure consistent separation from forms or tables.
## Per-page data & actions
Short reference of per-page APIs and primary actions used by templates and scripts.
- Scenarios (`templates/ScenarioForm.html`):
- Data: `GET /api/scenarios/`
- Actions: `POST /api/scenarios/`
- Parameters (`templates/ParameterInput.html`):
- Data: `GET /api/scenarios/`, `GET /api/parameters/`
- Actions: `POST /api/parameters/`
- Costs (`templates/costs.html`):
- Data: `GET /api/costs/capex`, `GET /api/costs/opex`
- Actions: `POST /api/costs/capex`, `POST /api/costs/opex`
- Consumption (`templates/consumption.html`):
- Data: `GET /api/consumption/`
- Actions: `POST /api/consumption/`
- Production (`templates/production.html`):
- Data: `GET /api/production/`
- Actions: `POST /api/production/`
- Equipment (`templates/equipment.html`):
- Data: `GET /api/equipment/`
- Actions: `POST /api/equipment/`
- Maintenance (`templates/maintenance.html`):
- Data: `GET /api/maintenance/` (pagination support)
- Actions: `POST /api/maintenance/`, `PUT /api/maintenance/{id}`, `DELETE /api/maintenance/{id}`
- Simulations (`templates/simulations.html`):
- Data: `GET /api/scenarios/`, `GET /api/parameters/`
- Actions: `POST /api/simulations/run`
- Reporting (`templates/reporting.html` and `templates/Dashboard.html`):
- Data: `POST /api/reporting/summary` (accepts arrays of `{ "result": float }` objects)
- Actions: Trigger summary refreshes and export/download actions.
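For reference, a hedged example of exercising the reporting endpoint from Python (assumes the `requests` package is installed and the app is running locally on port 8000):

```python
# Hedged usage example for the reporting summary endpoint.
import requests

payload = [{"result": 1.2}, {"result": 0.8}, {"result": 1.5}]
response = requests.post("http://localhost:8000/api/reporting/summary", json=payload)
response.raise_for_status()
print(response.json())  # count, mean, median, percentile, and tail-risk fields
```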
## UI Template Audit (2025-10-20)
- Existing HTML templates: `ScenarioForm.html`, `ParameterInput.html`, and `Dashboard.html` (reporting summary view).
- Coverage gaps remain for costs, consumption, production, equipment, maintenance, and simulation workflows—no dedicated templates yet.
- Shared layout primitives (navigation/header/footer) are absent; current pages duplicate boilerplate markup.
- Dashboard currently covers reporting metrics but should be wired to a central `/` route once the shared layout lands.
- Next steps: introduce a `base.html`, refactor existing templates to extend it, and scaffold placeholder pages for the remaining features.


@@ -1,118 +0,0 @@
# 14 Testing, CI and Quality Assurance
This chapter centralizes the project's testing strategy, CI configuration, and quality targets.
## Overview
CalMiner uses a combination of unit, integration, and end-to-end tests to ensure quality.
### Frameworks
- Backend: pytest for unit and integration tests.
- Frontend: pytest with Playwright for E2E tests.
- Database: pytest fixtures with psycopg2 for DB tests.
### Test Types
- Unit Tests: Test individual functions/modules.
- Integration Tests: Test API endpoints and DB interactions.
- E2E Tests: Playwright for full user flows.
### CI/CD
- Use Gitea Actions for CI/CD; workflows live under `.gitea/workflows/`.
- `test.yml` runs on every push, provisions a temporary Postgres 16 service, waits for readiness, executes the setup script in dry-run and live modes, installs Playwright browsers, and finally runs the full pytest suite.
- `build-and-push.yml` builds the Docker image with `docker/build-push-action@v2`, reusing GitHub Actions cache-backed layers, and pushes to the Gitea registry.
- `deploy.yml` connects to the target host (via `appleboy/ssh-action`) to pull the freshly pushed image and restart the container.
- Mandatory secrets: `REGISTRY_USERNAME`, `REGISTRY_PASSWORD`, `REGISTRY_URL`, `SSH_HOST`, `SSH_USERNAME`, `SSH_PRIVATE_KEY`.
- Run tests on pull requests to shared branches; enforce coverage target ≥80% (pytest-cov).
### Running Tests
- Unit: `pytest tests/unit/`
- E2E: `pytest tests/e2e/`
- All: `pytest`
### Test Directory Structure
Organize tests under the `tests/` directory mirroring the application structure:
```text
tests/
  unit/
    test_<module>.py
  e2e/
    test_<flow>.py
  fixtures/
    conftest.py
```
### Fixtures and Test Data
- Define reusable fixtures in `tests/fixtures/conftest.py`.
- Use temporary in-memory databases or isolated schemas for DB tests.
- Load sample data via fixtures for consistent test environments.
- Leverage the `seeded_ui_data` fixture in `tests/unit/conftest.py` to populate scenarios with related cost, maintenance, and simulation records for deterministic UI route checks.
### E2E (Playwright) Tests
The E2E test suite, located in `tests/e2e/`, uses Playwright to simulate user interactions in a live browser environment. These tests are designed to catch issues in the UI, frontend-backend integration, and overall application flow.
#### Fixtures
- `live_server`: A session-scoped fixture that launches the FastAPI application in a separate process, making it accessible to the browser.
- `playwright_instance`, `browser`, `page`: Standard `pytest-playwright` fixtures for managing the Playwright instance, browser, and individual pages.
#### Smoke Tests
- UI Page Loading: `test_smoke.py` contains a parameterized test that systematically navigates to all UI routes to ensure they load without errors, have the correct title, and display a primary heading.
- Form Submissions: Each major form in the application has a corresponding test file (e.g., `test_scenarios.py`, `test_costs.py`) that verifies the page loads, an item can be created via the form, a success message appears, and the UI updates.
### Running E2E Tests
To run the Playwright tests:
```bash
pytest tests/e2e/
```
To run headed mode:
```bash
pytest tests/e2e/ --headed
```
### Mocking and Dependency Injection
- Use `unittest.mock` to mock external dependencies.
- Inject dependencies via function parameters or FastAPI's dependency overrides in tests.
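A hedged sketch of the dependency-override pattern; it reuses the shared `routes/dependencies.get_db` dependency mentioned earlier, while the mock session and exercised route are assumptions:

```python
# Hedged sketch of FastAPI dependency overrides in a test.
from unittest.mock import MagicMock

from fastapi.testclient import TestClient

from main import app
from routes.dependencies import get_db


def override_get_db():
    # Yield a mock in place of the real SQLAlchemy session.
    yield MagicMock()


app.dependency_overrides[get_db] = override_get_db
client = TestClient(app)
response = client.get("/api/scenarios/")  # route exercised without a real DB
```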
### Code Coverage
- Install `pytest-cov` to generate coverage reports.
- Run with coverage: `pytest --cov --cov-report=term` (use `--cov-report=html` when visualizing hotspots).
- Target 95%+ overall coverage. Focus on historically low modules: `services/simulation.py`, `services/reporting.py`, `middleware/validation.py`, and `routes/ui.py`.
- Latest snapshot (2025-10-21): `pytest --cov=. --cov-report=term-missing` returns **91%** overall coverage.
### CI Integration
`test.yml` encapsulates the steps below:
- Check out the repository and set up Python 3.10.
- Configure the runner's apt proxy (if available), install project dependencies (requirements + test extras), and download Playwright browsers.
- Run `pytest` (extend with `--cov` flags when enforcing coverage).
> The pip cache step is temporarily disabled in `test.yml` until the self-hosted cache service is exposed (see `docs/ci-cache-troubleshooting.md`).
`build-and-push.yml` adds:
- Registry login using repository secrets.
- Docker image build/push with GHA cache storage (`cache-from/cache-to` set to `type=gha`).
`deploy.yml` handles:
- SSH into the deployment host.
- Pull the tagged image from the registry.
- Stop, remove, and relaunch the `calminer` container exposing port 8000.
When adding new workflows, mirror this structure to ensure secrets, caching, and deployment steps remain aligned with the production environment.


@@ -1,77 +0,0 @@
# 15 Development Setup Guide
This document outlines the local development environment and steps to get the project running.
## Prerequisites
- Python (version 3.10+)
- PostgreSQL (version 13+)
- Git
## Clone and Project Setup
```powershell
# Clone the repository
git clone https://git.allucanget.biz/allucanget/calminer.git
cd calminer
```
## Virtual Environment
```powershell
# Create and activate a virtual environment
python -m venv .venv
.\.venv\Scripts\Activate.ps1
```
## Install Dependencies
```powershell
pip install -r requirements.txt
```
## Database Setup
1. Create database user:
```sql
CREATE USER calminer_user WITH PASSWORD 'your_password';
```
1. Create database:
```sql
CREATE DATABASE calminer;
```
## Environment Variables
1. Copy `.env.example` to `.env` at project root.
1. Edit `.env` to set database connection details:
```dotenv
DATABASE_DRIVER=postgresql
DATABASE_HOST=localhost
DATABASE_PORT=5432
DATABASE_USER=calminer_user
DATABASE_PASSWORD=your_password
DATABASE_NAME=calminer
DATABASE_SCHEMA=public
```
1. The application uses `python-dotenv` to load these variables. A legacy `DATABASE_URL` value is still accepted if the granular keys are omitted.
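A hedged sketch of how these variables can be assembled into a connection URL, including the legacy `DATABASE_URL` fallback; `config/database.py` may implement this differently:

```python
# Hedged sketch: assemble a SQLAlchemy URL from granular env vars.
import os

from dotenv import load_dotenv

load_dotenv()

url = os.getenv("DATABASE_URL")  # legacy fallback still accepted
if not url:
    url = (
        f"{os.getenv('DATABASE_DRIVER', 'postgresql')}://"
        f"{os.getenv('DATABASE_USER')}:{os.getenv('DATABASE_PASSWORD')}"
        f"@{os.getenv('DATABASE_HOST', 'localhost')}:{os.getenv('DATABASE_PORT', '5432')}"
        f"/{os.getenv('DATABASE_NAME')}"
    )
```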
## Running the Application
```powershell
# Start the FastAPI server
uvicorn main:app --reload
```
## Testing
```powershell
pytest
```
E2E tests use Playwright and a session-scoped `live_server` fixture that starts the app at `http://localhost:8001` for browser-driven tests.


@@ -1,26 +0,0 @@
---
title: "CalMiner Architecture Documentation"
description: "arc42-based architecture documentation for the CalMiner project"
---
# Architecture documentation (arc42 mapping)
This folder mirrors the arc42 chapter structure (adapted to Markdown).
## Files
- [01 Introduction and Goals](01_introduction_and_goals.md)
- [02 Architecture Constraints](02_architecture_constraints.md)
- [03 Context and Scope](03_context_and_scope.md)
- [04 Solution Strategy](04_solution_strategy.md)
- [05 Building Block View](05_building_block_view.md)
- [06 Runtime View](06_runtime_view.md)
- [07 Deployment View](07_deployment_view.md)
- [08 Concepts](08_concepts.md)
- [09 Architecture Decisions](09_architecture_decisions.md)
- [10 Quality Requirements](10_quality_requirements.md)
- [11 Technical Risks](11_technical_risks.md)
- [12 Glossary](12_glossary.md)
- [13 UI and Style](13_ui_and_style.md)
- [14 Testing & CI](14_testing_ci.md)
- [15 Development Setup](15_development_setup.md)


@@ -1,27 +0,0 @@
# CI Cache Troubleshooting
## Background
The test workflow (`.gitea/workflows/test.yml`) uses the `actions/cache` action to reuse the pip download cache located at `~/.cache/pip`. The cache key now hashes both `requirements.txt` and `requirements-test.txt` so the cache stays aligned with dependency changes.
## Current Observation
Recent CI runs report the following warning when the cache step executes:
```text
::warning::Failed to restore: getCacheEntry failed: connect ETIMEDOUT 172.17.0.5:40181
Cache not found for input keys: Linux-pip-<hash>, Linux-pip-
```
The timeout indicates the runner cannot reach the cache backend rather than a normal cache miss.
## Recommended Follow-Up
- Confirm that the Actions cache service is enabled for the CI environment (Gitea runners require the cache server URL to be provided via `ACTIONS_CACHE_URL` and `ACTIONS_RUNTIME_URL`).
- Verify network connectivity from the runner to the cache service endpoint and ensure required ports are open.
- After connectivity is restored, rerun the workflow to allow the cache to be populated and confirm subsequent runs restore the cache without warnings.
## Interim Guidance
- The workflow will proceed without cached dependencies, but package installs may take longer.
- Keep the cache step in place so it begins working automatically once the infrastructure is configured.


@@ -1,31 +0,0 @@
# Setup Script Idempotency Audit (2025-10-25)
This note captures the current evaluation of idempotent behaviour for `scripts/setup_database.py` and outlines follow-up actions.
## Admin Tasks
- **ensure_database**: guarded by `SELECT 1 FROM pg_database`; re-runs safely (see the sketch after this list). Failure mode: network issues or lack of privileges surface as psycopg2 errors without additional context.
- **ensure_role**: checks `pg_roles`, creates role if missing, reapplies grants each time. Subsequent runs execute grants again but PostgreSQL tolerates repeated grants.
- **ensure_schema**: uses `information_schema` guard and respects `--dry-run`; idempotent when schema is `public` or already present.
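A hedged sketch of the `pg_database` guard described above, using psycopg2; the real `scripts/setup_database.py` may structure this differently:

```python
# Hedged sketch of an idempotent ensure_database guard (psycopg2).
import psycopg2


def ensure_database(conn, name: str) -> None:
    conn.autocommit = True  # CREATE DATABASE cannot run inside a transaction
    with conn.cursor() as cur:
        cur.execute("SELECT 1 FROM pg_database WHERE datname = %s", (name,))
        if cur.fetchone() is None:
            cur.execute(f'CREATE DATABASE "{name}"')  # safe to re-run: guarded above


# Placeholder credentials for illustration only.
admin_conn = psycopg2.connect(host="localhost", user="postgres",
                              password="admin_password", dbname="postgres")
ensure_database(admin_conn, "calminer")
```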
## Application Tasks
- **initialize_schema**: relies on SQLAlchemy `create_all(checkfirst=True)`; repeatable. Dry-run output remains descriptive.
- **run_migrations**: new baseline workflow applies `000_base.sql` once and records legacy scripts as applied. Subsequent runs detect the baseline in `schema_migrations` and skip reapplication.
## Seeding
- `seed_baseline_data` seeds currencies and measurement units with upsert logic. Verification now raises on missing data, preventing silent failures.
- Running `--seed-data` repeatedly performs `ON CONFLICT` updates, making the operation safe.
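The upsert shape referenced above, sketched with `psycopg2.extras.execute_values`; the exact column list in `scripts/seed_data.py` is an assumption:

```python
# Hedged sketch of the ON CONFLICT upsert used for seeding currencies.
from psycopg2.extras import execute_values

# Sample rows for illustration; the real CURRENCY_SEEDS may differ.
CURRENCY_SEEDS = [("USD", "US Dollar", "USD$"), ("CAD", "Canadian Dollar", "CAD$")]


def seed_currencies(cur) -> None:
    execute_values(
        cur,
        """
        INSERT INTO currency (code, name, symbol)
        VALUES %s
        ON CONFLICT (code) DO UPDATE
        SET name = EXCLUDED.name, symbol = EXCLUDED.symbol
        """,
        CURRENCY_SEEDS,
    )
```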
## Outstanding Risks
1. Baseline migration relies on legacy files being present when first executed; if removed beforehand, old entries are never marked. (Low risk given repository state.)
2. `ensure_database` and `ensure_role` do not wrap SQL execution errors with additional context beyond psycopg2 messages.
3. Baseline verification assumes migrations and seeding run in the same process; manual runs of `scripts/seed_data.py` without the baseline could still fail.
## Recommended Actions
- Add regression tests ensuring repeated executions of key CLI paths (`--run-migrations`, `--seed-data`) result in no-op behaviour after the first run.
- Extend logging/error handling for admin operations to provide clearer messages on repeated failures.
- Consider a preflight check when migrations directory lacks legacy files but baseline is pending, warning about potential drift.


@@ -1,29 +0,0 @@
# Setup Script Logging Audit (2025-10-25)
The following observations capture current logging behaviour in `scripts/setup_database.py` and highlight areas requiring improved error handling and messaging.
## Connection Validation
- `validate_admin_connection` and `validate_application_connection` log entry/exit messages and raise `RuntimeError` with context if connection fails. This coverage is sufficient.
- `ensure_database` logs creation states but does not surface connection or SQL exceptions beyond the initial connection acquisition. When the inner `cursor.execute` calls fail, the exceptions bubble without contextual logging.
## Migration Runner
- Lists pending migrations and logs each application attempt.
- When the baseline is pending, the script logs whether it is a dry-run or live application and records legacy file marking. However, if `_apply_migration_file` raises an exception, the caller re-raises after logging the failure; there is no wrapping message guiding users toward manual cleanup.
- Legacy migration marking happens silently (just info logs). Failures during the insert into `schema_migrations` would currently propagate without added guidance.
## Seeding Workflow
- `seed_baseline_data` announces each seeding phase and skips verification in dry-run mode with a log breadcrumb.
- `_verify_seeded_data` warns about missing currencies/units and inactive defaults but does **not** raise errors, meaning CI can pass while the database is incomplete. There is no explicit log when verification succeeds.
- `_seed_units` logs when the `measurement_unit` table is missing, which is helpful, but the warning is the only feedback; no exception is raised.
## Suggested Enhancements
1. Wrap baseline application and legacy marking in `try/except` blocks that log actionable remediation steps before re-raising.
2. Promote seed verification failures (missing or inactive records) to exceptions so automated workflows fail fast; add success logs for clarity.
3. Add contextual logging around currency/measurement-unit insert failures, particularly around `execute_values` calls, to aid debugging malformed data.
4. Introduce structured logging (log codes or phases) for major steps (`CONNECT`, `MIGRATE`, `SEED`, `VERIFY`) to make scanning log files easier.
These findings inform the remaining TODO subtasks for enhanced error handling.


@@ -1,53 +0,0 @@
# Consolidated Migration Baseline Plan
This note outlines the content and structure of the planned baseline migration (`scripts/migrations/000_base.sql`). The objective is to capture the currently required schema changes in a single idempotent script so that fresh environments only need to apply one SQL file before proceeding with incremental migrations.
## Guiding Principles
1. **Idempotent DDL**: Every `CREATE` or `ALTER` statement must tolerate repeated execution. Use `IF NOT EXISTS` guards or existence checks (`information_schema`) where necessary.
2. **Order of Operations**: Create reference tables first, then update dependent tables, finally enforce foreign keys and constraints.
3. **Data Safety**: Default data seeded by migrations should be minimal and in ASCII-only form to avoid encoding issues in various shells and CI logs.
4. **Compatibility**: The baseline must reflect the schema shape expected by the current SQLAlchemy models, API routes, and seeding scripts.
## Schema Elements to Include
### 1. `currency` Table
- Columns: `id SERIAL PRIMARY KEY`, `code VARCHAR(3) UNIQUE NOT NULL`, `name VARCHAR(128) NOT NULL`, `symbol VARCHAR(8)`, `is_active BOOLEAN NOT NULL DEFAULT TRUE`.
- Index: implicit via unique constraint on `code`.
- Seed rows matching `scripts.seed_data.CURRENCY_SEEDS` (ASCII-only symbols such as `USD$`, `CAD$`).
- Upsert logic using `ON CONFLICT (code) DO UPDATE` to keep names/symbols in sync when rerun.
### 2. Currency Integration for CAPEX/OPEX
- Add `currency_id INTEGER` columns with `IF NOT EXISTS` guards.
- Populate `currency_id` from legacy `currency_code` if the column exists.
- Default null `currency_id` values to the USD row, then `ALTER` to `SET NOT NULL`.
- Create `fk_capex_currency` and `fk_opex_currency` constraints with `ON DELETE RESTRICT` semantics.
- Drop legacy `currency_code` column if it exists (safe because new column holds data).
### 3. Measurement Metadata on Consumption/Production
- Ensure `consumption` and `production_output` tables have `unit_name VARCHAR(64)` and `unit_symbol VARCHAR(16)` columns with `IF NOT EXISTS` guards.
### 4. `measurement_unit` Reference Table
- Columns: `id SERIAL PRIMARY KEY`, `code VARCHAR(64) UNIQUE NOT NULL`, `name VARCHAR(128) NOT NULL`, `symbol VARCHAR(16)`, `unit_type VARCHAR(32) NOT NULL`, `is_active BOOLEAN NOT NULL DEFAULT TRUE`, `created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()`, `updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()`.
- A trigger to maintain `updated_at` is deliberately deferred; the application layer will manage it later, so the baseline omits the trigger.
- Seed rows matching `MEASUREMENT_UNIT_SEEDS` (ASCII names/symbols). Use `ON CONFLICT (code) DO UPDATE` to keep descriptive fields aligned.
### 5. Transaction Handling
- Wrap the main operations in a single `BEGIN; ... COMMIT;` block.
- Use subtransactions (`DO $$ ... $$;`) only where conditional logic is required (e.g., checking column existence before backfill).
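To make the patterns above concrete (guarded creates, conditional backfill, upsert seeding), here is a minimal sketch of the statement shapes wrapped in a Python runner; the column lists are abbreviated and the actual contents of `000_base.sql` will differ:
```python
# Sketch of the idempotency patterns described above; abbreviated columns,
# not the actual contents of scripts/migrations/000_base.sql.
import psycopg2

BASELINE_SQL = """
BEGIN;

CREATE TABLE IF NOT EXISTS currency (
    id SERIAL PRIMARY KEY,
    code VARCHAR(3) UNIQUE NOT NULL,
    name VARCHAR(128) NOT NULL,
    symbol VARCHAR(8),
    is_active BOOLEAN NOT NULL DEFAULT TRUE
);

ALTER TABLE capex ADD COLUMN IF NOT EXISTS currency_id INTEGER;

-- Conditional backfill: only runs while the legacy column still exists.
DO $$
BEGIN
    IF EXISTS (
        SELECT 1 FROM information_schema.columns
        WHERE table_name = 'capex' AND column_name = 'currency_code'
    ) THEN
        UPDATE capex c SET currency_id = cur.id
        FROM currency cur WHERE cur.code = c.currency_code;
    END IF;
END $$;

-- Upsert seeding keeps reruns in sync instead of failing on duplicates.
INSERT INTO currency (code, name, symbol)
VALUES ('USD', 'US Dollar', 'USD$')
ON CONFLICT (code) DO UPDATE
    SET name = EXCLUDED.name, symbol = EXCLUDED.symbol;

COMMIT;
"""

conn = psycopg2.connect("")  # connection details come from the PG* env vars
conn.autocommit = True  # the script manages its own BEGIN/COMMIT block
with conn.cursor() as cur:
    cur.execute(BASELINE_SQL)  # safe to run repeatedly
conn.close()
```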
## Migration Tracking Alignment
- Baseline file will be named `000_base.sql`. After execution, insert a row into `schema_migrations` with filename `000_base.sql` to keep the tracking table aligned.
- Existing migrations (`20251021_add_currency_and_unit_fields.sql`, `20251022_create_currency_table_and_fks.sql`) remain for historical reference but will no longer be applied to new environments once the baseline is present.
## Next Steps
1. Draft `000_base.sql` reflecting the steps above.
2. Update `run_migrations` to recognise the baseline file and mark older migrations as applied when the baseline exists.
3. Provide documentation in `docs/quickstart.md` explaining how to reset an environment using the baseline plus seeds.

View File

@@ -1,253 +0,0 @@
# Quickstart & Expanded Project Documentation
This document contains the expanded development, usage, testing, and migration guidance moved out of the top-level README for brevity.
## Development
To get started locally:
```powershell
# Clone the repository
git clone https://git.allucanget.biz/allucanget/calminer.git
cd calminer
# Create and activate a virtual environment
python -m venv .venv
.\.venv\Scripts\Activate.ps1
# Install dependencies
pip install -r requirements.txt
# Start the development server
uvicorn main:app --reload
```
## Docker-based setup
To build and run the application using Docker instead of a local Python environment:
```powershell
# Build the application image (multi-stage build keeps runtime small)
docker build -t calminer:latest .
# Start the container on port 8000
docker run --rm -p 8000:8000 calminer:latest
# Supply environment variables (e.g., Postgres connection)
docker run --rm -p 8000:8000 ^
-e DATABASE_DRIVER="postgresql" ^
-e DATABASE_HOST="db.host" ^
-e DATABASE_PORT="5432" ^
-e DATABASE_USER="calminer" ^
-e DATABASE_PASSWORD="s3cret" ^
-e DATABASE_NAME="calminer" ^
-e DATABASE_SCHEMA="public" ^
calminer:latest
```
If you maintain a Postgres or Redis dependency locally, consider authoring a `docker compose` stack that pairs them with the app container. The Docker image expects the database to be reachable and migrations executed before serving traffic.
## Usage Overview
- **API base URL**: `http://localhost:8000/api`
- Key routes include creating scenarios, parameters, costs, consumption, production, equipment, maintenance, and reporting summaries. See the `routes/` directory for full details.
### Theme configuration
- Open `/ui/settings` to access the Settings dashboard. The **Theme Colors** form lists every CSS variable persisted in the `application_setting` table. Updates apply immediately across the UI once saved.
- Use the accompanying API endpoints for automation or integration tests:
- `GET /api/settings/css` returns the active variables, defaults, and metadata describing any environment overrides.
- `PUT /api/settings/css` accepts a payload such as `{"variables": {"--color-primary": "#112233"}}` and persists the change unless an environment override is in place.
- Environment variables prefixed with `CALMINER_THEME_` win over database values. For example, setting `CALMINER_THEME_COLOR_PRIMARY="#112233"` renders the corresponding input read-only and surfaces the override in the Environment Overrides table.
- Acceptable values include hex (`#rrggbb` or `#rrggbbaa`), `rgb()/rgba()`, and `hsl()/hsla()` expressions with the expected number of components. Invalid inputs trigger a validation error and the API responds with HTTP 422.
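For automation or integration tests, the round trip looks roughly like this; a sketch using `httpx` (already a test dependency), assuming the response JSON exposes the variables under a `variables` key:
```python
import httpx

with httpx.Client(base_url="http://localhost:8000") as client:
    # Read the active variables plus defaults and override metadata.
    current = client.get("/api/settings/css").json()
    print(current["variables"]["--color-primary"])  # assumed response shape

    # Persist a change; an invalid color value yields HTTP 422, and a
    # variable pinned by a CALMINER_THEME_* override is not persisted.
    resp = client.put(
        "/api/settings/css",
        json={"variables": {"--color-primary": "#112233"}},
    )
    resp.raise_for_status()
```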
## Dashboard Preview
1. Start the FastAPI server and navigate to `/`.
2. Review the headline metrics, scenario snapshot table, and cost/activity charts sourced from the current database state.
3. Use the "Refresh Dashboard" button to pull freshly aggregated data via `/ui/dashboard/data` without reloading the page.
## Testing
Run the unit test suite:
```powershell
pytest
```
E2E tests use Playwright and a session-scoped `live_server` fixture that starts the app at `http://localhost:8001` for browser-driven tests.
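The repository's actual fixture may differ, but a minimal sketch of a session-scoped `live_server` built on `uvicorn.Server` looks like this:
```python
# conftest.py sketch; the project's real live_server fixture may differ.
import threading
import time

import pytest
import uvicorn

from main import app


@pytest.fixture(scope="session")
def live_server():
    config = uvicorn.Config(
        app, host="127.0.0.1", port=8001, log_level="warning"
    )
    server = uvicorn.Server(config)
    thread = threading.Thread(target=server.run, daemon=True)
    thread.start()
    while not server.started:  # block until the app accepts connections
        time.sleep(0.1)
    yield "http://localhost:8001"
    server.should_exit = True  # ask uvicorn to shut down cleanly
    thread.join()
```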
## Migrations & Baseline
A consolidated baseline migration (`scripts/migrations/000_base.sql`) captures all schema changes required for a fresh installation. The script is idempotent: it creates the `currency` and `measurement_unit` reference tables, provisions the `application_setting` store for configurable UI/system options, ensures consumption and production records expose unit metadata, and enforces the foreign keys used by CAPEX and OPEX.
Configure granular database settings in your PowerShell session before running migrations:
```powershell
$env:DATABASE_DRIVER = 'postgresql'
$env:DATABASE_HOST = 'localhost'
$env:DATABASE_PORT = '5432'
$env:DATABASE_USER = 'calminer'
$env:DATABASE_PASSWORD = 's3cret'
$env:DATABASE_NAME = 'calminer'
$env:DATABASE_SCHEMA = 'public'
python scripts/setup_database.py --run-migrations --seed-data --dry-run
python scripts/setup_database.py --run-migrations --seed-data
```
The dry-run invocation reports which steps would execute without making changes. The live run applies the baseline (if not already recorded in `schema_migrations`) and seeds the reference data relied upon by the UI and API.
> When `--seed-data` is supplied without `--run-migrations`, the bootstrap script automatically applies any pending SQL migrations first so the `application_setting` table (and future settings-backed features) are present before seeding.
> The application still accepts `DATABASE_URL` as a fallback if the granular variables are not set.
## Database bootstrap workflow
Provision or refresh a database instance with `scripts/setup_database.py`. Populate the required environment variables (an example lives at `config/setup_test.env.example`) and run:
```powershell
# Load test credentials (PowerShell)
Get-Content .\config\setup_test.env.example |
ForEach-Object {
if ($_ -and -not $_.StartsWith('#')) {
$name, $value = $_ -split '=', 2
Set-Item -Path Env:$name -Value $value
}
}
# Dry-run to inspect the planned actions
python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data --dry-run -v
# Execute the full workflow
python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data -v
```
Typical log output confirms:
- Admin and application connections succeed for the supplied credentials.
- Database and role creation are idempotent (`already present` when rerun).
- SQLAlchemy metadata either reports missing tables or `All tables already exist`.
- Migrations list pending files and finish with `Applied N migrations` (a new database reports `Applied 1 migrations` for `000_base.sql`).
After a successful run the target database contains all application tables plus `schema_migrations`, and that table records each applied migration file. New installations only record `000_base.sql`; upgraded environments retain historical entries alongside the baseline.
### Local Postgres via Docker Compose
For local validation without installing Postgres directly, use the provided compose file:
```powershell
docker compose -f docker-compose.postgres.yml up -d
```
#### Summary
1. Start the Postgres container with `docker compose -f docker-compose.postgres.yml up -d`.
2. Export the granular database environment variables (host `127.0.0.1`, port `5433`, database `calminer_local`, user/password `calminer`/`secret`).
3. Run the setup script twice: first with `--dry-run` to preview actions, then without it to apply changes.
4. When finished, stop and optionally remove the container/volume using `docker compose -f docker-compose.postgres.yml down`.
The service exposes Postgres 16 on `localhost:5433` with database `calminer_local` and role `calminer`/`secret`. When the container is running, set the granular environment variables before invoking the setup script:
```powershell
$env:DATABASE_DRIVER = 'postgresql'
$env:DATABASE_HOST = '127.0.0.1'
$env:DATABASE_PORT = '5433'
$env:DATABASE_USER = 'calminer'
$env:DATABASE_PASSWORD = 'secret'
$env:DATABASE_NAME = 'calminer_local'
$env:DATABASE_SCHEMA = 'public'
python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data --dry-run -v
python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data -v
```
When testing is complete, shut down the container (and optional persistent volume) with:
```powershell
docker compose -f docker-compose.postgres.yml down
docker volume rm calminer_postgres_local_postgres_data # optional cleanup
```
Document successful runs (or issues encountered) in `.github/instructions/DONE.TODO.md` for future reference.
### Seeding reference data
`scripts/seed_data.py` provides targeted control over the baseline datasets when the full setup script is not required:
```powershell
python scripts/seed_data.py --currencies --units --dry-run
python scripts/seed_data.py --currencies --units
```
The seeder upserts the canonical currency catalog (`USD`, `EUR`, `CLP`, `RMB`, `GBP`, `CAD`, `AUD`) using ASCII-safe symbols (`USD$`, `EUR`, etc.) and the measurement units referenced by the UI (`tonnes`, `kilograms`, `pounds`, `liters`, `cubic_meters`, `kilowatt_hours`). The setup script invokes the same seeder when `--seed-data` is provided and verifies the expected rows afterward, warning if any are missing or inactive.
### Rollback guidance
`scripts/setup_database.py` now tracks compensating actions when it creates the database or application role. If a later step fails, the script replays those rollback actions (dropping the newly created database or role and revoking grants) before exiting. Dry runs never register rollback steps and remain read-only.
If the script reports that some rollback steps could not complete—for example because a connection cannot be established—rerun the script with `--dry-run` to confirm the desired end state and then apply the outstanding cleanup manually:
```powershell
python scripts/setup_database.py --ensure-database --ensure-role --dry-run -v
# Manual cleanup examples when automation cannot connect
psql -d postgres -c "DROP DATABASE IF EXISTS calminer"
psql -d postgres -c "DROP ROLE IF EXISTS calminer"
```
After a failure and rollback, rerun the full setup once the environment issues are resolved.
### CI pipeline environment
The `.gitea/workflows/test.yml` job spins up a temporary PostgreSQL 16 container and runs the setup script twice: once with `--dry-run` to validate the plan and again without it to apply migrations and seeds. No external secrets are required; the workflow sets the following environment variables for both invocations and for pytest:
| Variable | Value | Purpose |
| --- | --- | --- |
| `DATABASE_DRIVER` | `postgresql` | Signals the driver to the setup script |
| `DATABASE_HOST` | `postgres` | Hostname of the Postgres job service container |
| `DATABASE_PORT` | `5432` | Default service port |
| `DATABASE_NAME` | `calminer_ci` | Target database created by the workflow |
| `DATABASE_USER` | `calminer` | Application role used during tests |
| `DATABASE_PASSWORD` | `secret` | Password for both admin and app role |
| `DATABASE_SCHEMA` | `public` | Default schema for the tests |
| `DATABASE_SUPERUSER` | `calminer` | Setup script uses the same role for admin actions |
| `DATABASE_SUPERUSER_PASSWORD` | `secret` | Matches the Postgres service password |
| `DATABASE_SUPERUSER_DB` | `calminer_ci` | Database to connect to for admin operations |
The workflow also updates `DATABASE_URL` for pytest to point at the CI Postgres instance. Existing tests continue to work unchanged, since SQLAlchemy reads the URL exactly as it does locally.
Because the workflow provisions everything inline, no repository or organization secrets need to be configured for basic CI runs. If you later move the setup step to staging or production pipelines, replace these inline values with secrets managed by the CI platform. When running on self-hosted runners behind an HTTP proxy or apt cache, ensure Playwright dependencies and OS packages inherit the same proxy settings that the workflow configures prior to installing browsers.
### Staging environment workflow
Use the staging checklist in `docs/staging_environment_setup.md` when running the setup script against the shared environment. A sample variable file (`config/setup_staging.env`) records the expected inputs (host, port, admin/application roles); copy it outside the repository or load the values securely via your shell before executing the workflow.
Recommended execution order:
1. Dry run with `--dry-run -v` to confirm connectivity and review planned operations. Capture the output to `reports/setup_staging_dry_run.log` (or similar) for auditing.
2. Execute the live run with the same flags minus `--dry-run` to provision the database, role grants, migrations, and seed data. Save the log as `reports/setup_staging_apply.log`.
3. Repeat the dry run to verify idempotency and record the result (for example `reports/setup_staging_post_apply.log`).
Record any issues in `.github/instructions/TODO.md` or `.github/instructions/DONE.TODO.md` as appropriate so the team can track follow-up actions.
## Database Objects
The database contains tables such as `capex`, `opex`, `chemical_consumption`, `fuel_consumption`, `water_consumption`, `scrap_consumption`, `production_output`, `equipment_operation`, `ore_batch`, `exchange_rate`, and `simulation_result`.
## Current implementation status (2025-10-21)
- Currency normalization: a `currency` table and backfill scripts exist; routes accept `currency_id` and `currency_code` for compatibility.
- Simulation engine: scaffolding in `services/simulation.py` and `/api/simulations/run` return in-memory results; persistence to `models/simulation_result` is planned.
- Reporting: `services/reporting.py` provides summary statistics used by `POST /api/reporting/summary`.
- Tests & coverage: unit and E2E suites exist; recent local coverage is >90%.
- Remaining work: authentication, persist simulation runs, CI/CD and containerization.
## Where to look next
- Architecture overview & chapters: [architecture](architecture/README.md) (per-chapter files under `docs/architecture/`)
- [Testing & CI](architecture/14_testing_ci.md)
- [Development setup](architecture/15_development_setup.md)
- Implementation plan & roadmap: [Solution strategy](architecture/04_solution_strategy.md)
- Routes: [routes](../routes/)
- Services: [services](../services/)
- Scripts: [scripts](../scripts/) (migrations and backfills)

View File

@@ -1,78 +0,0 @@
# Baseline Seed Data Plan
This document captures the datasets that should be present in a fresh CalMiner installation and the structure required to manage them through `scripts/seed_data.py`.
## Currency Catalog
The `currency` table already exists and is seeded today via `scripts/seed_data.py`. The goal is to keep the canonical list in one place and ensure the default currency (USD) is always active.
| Code | Name | Symbol | Notes |
| ---- | ------------------- | ------ | ---------------------------------------- |
| USD | US Dollar | $ | Default currency (`DEFAULT_CURRENCY_CODE`) |
| EUR | Euro | € | |
| CLP | Chilean Peso | $ | |
| RMB | Chinese Yuan | ¥ | |
| GBP | British Pound | £ | |
| CAD | Canadian Dollar | $ | |
| AUD | Australian Dollar | $ | |
Seeding behaviour:
- Upsert by ISO code; keep existing name/symbol when updated manually.
- Ensure `is_active` remains true for USD and defaults to true for new rows.
- Defer to runtime validation in `routes.currencies` for enforcing default behaviour.
## Measurement Units
UI routes (`routes/ui.py`) currently rely on the in-memory `MEASUREMENT_UNITS` list to populate dropdowns for consumption and production forms. To make this configurable and available to the API, introduce a dedicated `measurement_unit` table and seed it.
Proposed schema:
| Column | Type | Notes |
| ------------- | -------------- | ------------------------------------ |
| id | SERIAL / BIGINT | Primary key. |
| code | TEXT | Stable slug (e.g. `tonnes`). Unique. |
| name | TEXT | Display label. |
| symbol | TEXT | Short symbol (nullable). |
| unit_type | TEXT | Category (`mass`, `volume`, `energy`).|
| is_active | BOOLEAN | Default `true` for soft disabling. |
| created_at | TIMESTAMP | Optional `NOW()` default. |
| updated_at | TIMESTAMP | Optional `NOW()` trigger/default. |
Initial seed set (mirrors existing UI list plus type categorisation):
| Code | Name | Symbol | Unit Type |
| --------------- | ---------------- | ------ | --------- |
| tonnes | Tonnes | t | mass |
| kilograms | Kilograms | kg | mass |
| pounds | Pounds | lb | mass |
| liters | Liters | L | volume |
| cubic_meters | Cubic Meters | m3 | volume |
| kilowatt_hours | Kilowatt Hours | kWh | energy |
Seeding behaviour:
- Upsert rows by `code`.
- Preserve `unit_type` and `symbol` unless explicitly changed via administration tooling.
- Continue surfacing unit options to the UI by querying this table instead of the static constant.
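A sketch of this upsert-by-`code` behaviour using `psycopg2.extras.execute_values`; the real seeder's structure may differ, and note that only `name` is refreshed so manual `symbol`/`unit_type` changes survive reruns:
```python
from psycopg2.extras import execute_values

# Subset of the seed set above; tuples are (code, name, symbol, unit_type).
MEASUREMENT_UNIT_SEEDS = [
    ("tonnes", "Tonnes", "t", "mass"),
    ("kilograms", "Kilograms", "kg", "mass"),
    ("liters", "Liters", "L", "volume"),
]

UPSERT_SQL = """
    INSERT INTO measurement_unit (code, name, symbol, unit_type)
    VALUES %s
    ON CONFLICT (code) DO UPDATE SET name = EXCLUDED.name
"""


def seed_units(cursor) -> None:
    # Rerunning is safe: rows are matched on code and never duplicated;
    # symbol and unit_type are preserved for existing rows.
    execute_values(cursor, UPSERT_SQL, MEASUREMENT_UNIT_SEEDS)
```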
## Default Settings
The application expects certain defaults to exist:
- **Default currency**: enforced by `routes.currencies._ensure_default_currency`; ensure seeds keep USD active.
- **Fallback measurement unit**: UI currently auto-selects the first option in the list. Once units move to the database, expose an application setting to choose a fallback (future work tracked under "Application Settings management").
## Seeding Structure Updates
To support the datasets above:
1. Extend `scripts/seed_data.py` with a `SeedDataset` registry so each dataset (currencies, units, future defaults) can declare its loader/upsert function and optional dependencies.
2. Add a `--dataset` CLI selector for targeted seeding while keeping `--all` as the default for `setup_database.py` integrations.
3. Update `scripts/setup_database.py` to:
- Run migration ensuring `measurement_unit` table exists.
- Execute the unit seeder after currencies when `--seed-data` is supplied.
- Verify post-seed counts, logging which dataset was inserted/updated.
4. Adjust UI routes to load measurement units from the database and remove the hard-coded list once the table is available.
This plan aligns with the TODO item for seeding initial data and lays the groundwork for consolidating migrations around a single baseline file that introduces both the schema and seed data in an idempotent manner.
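A minimal sketch of the registry from step 1 and the dependency ordering it enables (all names illustrative, not existing code); the `--dataset` CLI selector would simply pass its values to `run`:
```python
# Sketch of the proposed SeedDataset registry; names are illustrative.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class SeedDataset:
    name: str
    seeder: Callable[[], None]  # loader/upsert function for the dataset
    depends_on: List[str] = field(default_factory=list)


REGISTRY: Dict[str, SeedDataset] = {}


def register(dataset: SeedDataset) -> None:
    REGISTRY[dataset.name] = dataset


def run(names: List[str] | None = None) -> None:
    """Run the selected datasets (--dataset) or everything (--all)."""
    selected = names or list(REGISTRY)
    done: set[str] = set()

    def _run(name: str) -> None:
        if name in done:
            return
        dataset = REGISTRY[name]
        for dep in dataset.depends_on:
            _run(dep)  # dependencies first, e.g. currencies before units
        dataset.seeder()
        done.add(name)

    for name in selected:
        _run(name)
```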

View File

@@ -1,101 +0,0 @@
# Staging Environment Setup
This guide outlines how to provision and validate the CalMiner staging database using `scripts/setup_database.py`. It complements the local and CI-focused instructions in `docs/quickstart.md`.
## Prerequisites
- Network access to the staging infrastructure (VPN or bastion, as required by ops).
- Provisioned PostgreSQL instance with superuser or delegated admin credentials for maintenance.
- Application credentials (role + password) dedicated to CalMiner staging.
- The application repository checked out with Python dependencies installed (`pip install -r requirements.txt`).
- Optional but recommended: a writable directory (for example `reports/`) to capture setup logs.
> Replace the placeholder values in the examples below with the actual host, port, and credential details supplied by ops.
## Environment Configuration
Populate the following environment variables before invoking the setup script. Store them in a secure location such as `config/setup_staging.env` (excluded from source control) and load them with `dotenv` or your shell profile.
| Variable | Description |
| --- | --- |
| `DATABASE_HOST` | Staging PostgreSQL hostname or IP (for example `staging-db.internal`). |
| `DATABASE_PORT` | Port exposed by the staging PostgreSQL service (default `5432`). |
| `DATABASE_NAME` | CalMiner staging database name (for example `calminer_staging`). |
| `DATABASE_USER` | Application role used by the FastAPI app (for example `calminer_app`). |
| `DATABASE_PASSWORD` | Password for the application role. |
| `DATABASE_SCHEMA` | Optional non-public schema; omit or set to `public` otherwise. |
| `DATABASE_SUPERUSER` | Administrative role with rights to create roles/databases (for example `calminer_admin`). |
| `DATABASE_SUPERUSER_PASSWORD` | Password for the administrative role. |
| `DATABASE_SUPERUSER_DB` | Database to connect to for admin tasks (default `postgres`). |
| `DATABASE_ADMIN_URL` | Optional DSN that overrides the granular admin settings above. |
You may also set `DATABASE_URL` for application runtime convenience, but the setup script only requires the values listed in the table.
### Loading Variables (PowerShell example)
```powershell
$env:DATABASE_HOST = "staging-db.internal"
$env:DATABASE_PORT = "5432"
$env:DATABASE_NAME = "calminer_staging"
$env:DATABASE_USER = "calminer_app"
$env:DATABASE_PASSWORD = "<app-password>"
$env:DATABASE_SUPERUSER = "calminer_admin"
$env:DATABASE_SUPERUSER_PASSWORD = "<admin-password>"
$env:DATABASE_SUPERUSER_DB = "postgres"
```
For bash shells, export the same variables using `export VARIABLE=value` or load them through `dotenv`.
## Setup Workflow
Run the setup script in three phases to validate idempotency and capture diagnostics:
1. **Dry run (diagnostic):**
```powershell
python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data --dry-run -v `
2>&1 | Tee-Object -FilePath reports/setup_staging_dry_run.log
```
Confirm that the script reports planned actions without failures. If the application role is missing, a dry run will log skip messages until a live run creates the role.
2. **Apply changes:**
```powershell
python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data -v `
2>&1 | Tee-Object -FilePath reports/setup_staging_apply.log
```
Verify the log for successful database creation, role grants, migration execution, and seed verification.
3. **Post-apply dry run:**
```powershell
python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data --dry-run -v `
2>&1 | Tee-Object -FilePath reports/setup_staging_post_apply.log
```
This run should confirm that all schema objects, migrations, and seed data are already in place.
## Validation Checklist
- [ ] Confirm the staging application can connect using the application DSN (for example, run `pytest tests/e2e/test_smoke.py` against staging or trigger a smoke test workflow).
- [ ] Inspect `schema_migrations` to ensure the baseline migration (`000_base.sql`) is recorded.
- [ ] Spot-check seeded reference data (`currency`, `measurement_unit`) for correctness.
- [ ] Capture and archive the three setup logs in a shared location for audit purposes.
## Troubleshooting
- If the dry run reports skipped actions because the application role does not exist, proceed with the live run; subsequent dry runs will validate as expected.
- Connection errors usually stem from network restrictions or incorrect credentials. Validate reachability with `psql` or `pg_isready` using the same host/port and credentials.
- For permission issues during migrations or seeding, confirm the admin role has rights on the target database and that the application role inherits the expected privileges.
## Rollback Guidance
- Database creation and role grants register rollback actions when not running in dry-run mode. If a later step fails during a live run, the script automatically revokes grants or drops newly created resources as part of its rollback routine before exiting.
- For staged environments where manual intervention is required, coordinate with ops before dropping databases or roles.
## Next Steps
- Keep this document updated as staging infrastructure evolves (for example, when migrating to managed services or rotating credentials).
- Once staging validation is complete, summarize the outcome in `.github/instructions/DONE.TODO.md` and cross-link the relevant log files.

View File

@@ -17,6 +17,7 @@ from routes.currencies import router as currencies_router
 from routes.simulations import router as simulations_router
 from routes.maintenance import router as maintenance_router
 from routes.settings import router as settings_router
+from routes.users import router as users_router

 # Initialize database schema
 Base.metadata.create_all(bind=engine)
@@ -30,6 +31,12 @@ async def json_validation(
 ) -> Response:
     return await validate_json(request, call_next)


+@app.get("/health", summary="Container health probe")
+async def health() -> dict[str, str]:
+    return {"status": "ok"}
+
+
 app.mount("/static", StaticFiles(directory="static"), name="static")

 # Include API routers
@@ -46,3 +53,4 @@ app.include_router(reporting_router)
 app.include_router(currencies_router)
 app.include_router(settings_router)
 app.include_router(ui_router)
+app.include_router(users_router)

View File

@@ -4,7 +4,10 @@ from fastapi import HTTPException, Request, Response

 MiddlewareCallNext = Callable[[Request], Awaitable[Response]]


-async def validate_json(request: Request, call_next: MiddlewareCallNext) -> Response:
+async def validate_json(
+    request: Request, call_next: MiddlewareCallNext
+) -> Response:
     # Only validate JSON for requests with a body
     if request.method in ("POST", "PUT", "PATCH"):
         try:

View File

@@ -2,5 +2,9 @@
 models package initializer. Import key models so they're registered
 with the shared Base.metadata when the package is imported by tests.
 """
 from . import application_setting  # noqa: F401
 from . import currency  # noqa: F401
+from . import role  # noqa: F401
+from . import user  # noqa: F401
+from . import theme_setting  # noqa: F401

View File

@@ -14,15 +14,24 @@ class ApplicationSetting(Base):
     id: Mapped[int] = mapped_column(primary_key=True, index=True)
     key: Mapped[str] = mapped_column(String(128), unique=True, nullable=False)
     value: Mapped[str] = mapped_column(Text, nullable=False)
-    value_type: Mapped[str] = mapped_column(String(32), nullable=False, default="string")
-    category: Mapped[str] = mapped_column(String(32), nullable=False, default="general")
+    value_type: Mapped[str] = mapped_column(
+        String(32), nullable=False, default="string"
+    )
+    category: Mapped[str] = mapped_column(
+        String(32), nullable=False, default="general"
+    )
     description: Mapped[Optional[str]] = mapped_column(Text, nullable=True)
-    is_editable: Mapped[bool] = mapped_column(Boolean, nullable=False, default=True)
+    is_editable: Mapped[bool] = mapped_column(
+        Boolean, nullable=False, default=True
+    )
     created_at: Mapped[datetime] = mapped_column(
         DateTime(timezone=True), server_default=func.now(), nullable=False
     )
     updated_at: Mapped[datetime] = mapped_column(
-        DateTime(timezone=True), server_default=func.now(), onupdate=func.now(), nullable=False
+        DateTime(timezone=True),
+        server_default=func.now(),
+        onupdate=func.now(),
+        nullable=False,
     )

     def __repr__(self) -> str:

View File

@@ -29,8 +29,9 @@ class Capex(Base):
     @currency_code.setter
     def currency_code(self, value: str) -> None:
         # store pending code so application code or migrations can pick it up
-        setattr(self, "_currency_code_pending",
-                (value or "USD").strip().upper())
+        setattr(
+            self, "_currency_code_pending", (value or "USD").strip().upper()
+        )


 # SQLAlchemy event handlers to ensure currency_id is set before insert/update
@@ -42,22 +43,27 @@ def _resolve_currency(mapper, connection, target):
         return
     code = getattr(target, "_currency_code_pending", None) or "USD"
     # Try to find existing currency id
-    row = connection.execute(text("SELECT id FROM currency WHERE code = :code"), {
-        "code": code}).fetchone()
+    row = connection.execute(
+        text("SELECT id FROM currency WHERE code = :code"), {"code": code}
+    ).fetchone()
     if row:
         cid = row[0]
     else:
         # Insert new currency and attempt to get lastrowid
         res = connection.execute(
-            text("INSERT INTO currency (code, name, symbol, is_active) VALUES (:code, :name, :symbol, :active)"),
+            text(
+                "INSERT INTO currency (code, name, symbol, is_active) VALUES (:code, :name, :symbol, :active)"
+            ),
             {"code": code, "name": code, "symbol": None, "active": True},
         )
         try:
             cid = res.lastrowid
         except Exception:
             # fallback: select after insert
-            cid = connection.execute(text("SELECT id FROM currency WHERE code = :code"), {
-                "code": code}).scalar()
+            cid = connection.execute(
+                text("SELECT id FROM currency WHERE code = :code"),
+                {"code": code},
+            ).scalar()
     target.currency_id = cid

View File

@@ -14,8 +14,11 @@ class Currency(Base):
# reverse relationships (optional) # reverse relationships (optional)
capex_items = relationship( capex_items = relationship(
"Capex", back_populates="currency", lazy="select") "Capex", back_populates="currency", lazy="select"
)
opex_items = relationship("Opex", back_populates="currency", lazy="select") opex_items = relationship("Opex", back_populates="currency", lazy="select")
def __repr__(self): def __repr__(self):
return f"<Currency code={self.code} name={self.name} symbol={self.symbol}>" return (
f"<Currency code={self.code} name={self.name} symbol={self.symbol}>"
)

View File

@@ -28,28 +28,34 @@ class Opex(Base):
     @currency_code.setter
     def currency_code(self, value: str) -> None:
-        setattr(self, "_currency_code_pending",
-                (value or "USD").strip().upper())
+        setattr(
+            self, "_currency_code_pending", (value or "USD").strip().upper()
+        )


 def _resolve_currency_opex(mapper, connection, target):
     if getattr(target, "currency_id", None):
         return
     code = getattr(target, "_currency_code_pending", None) or "USD"
-    row = connection.execute(text("SELECT id FROM currency WHERE code = :code"), {
-        "code": code}).fetchone()
+    row = connection.execute(
+        text("SELECT id FROM currency WHERE code = :code"), {"code": code}
+    ).fetchone()
     if row:
         cid = row[0]
     else:
         res = connection.execute(
-            text("INSERT INTO currency (code, name, symbol, is_active) VALUES (:code, :name, :symbol, :active)"),
+            text(
+                "INSERT INTO currency (code, name, symbol, is_active) VALUES (:code, :name, :symbol, :active)"
+            ),
             {"code": code, "name": code, "symbol": None, "active": True},
         )
         try:
             cid = res.lastrowid
         except Exception:
-            cid = connection.execute(text("SELECT id FROM currency WHERE code = :code"), {
-                "code": code}).scalar()
+            cid = connection.execute(
+                text("SELECT id FROM currency WHERE code = :code"),
+                {"code": code},
+            ).scalar()
     target.currency_id = cid

View File

@@ -10,14 +10,17 @@ class Parameter(Base):
     id: Mapped[int] = mapped_column(primary_key=True, index=True)
     scenario_id: Mapped[int] = mapped_column(
-        ForeignKey("scenario.id"), nullable=False)
+        ForeignKey("scenario.id"), nullable=False
+    )
     name: Mapped[str] = mapped_column(nullable=False)
     value: Mapped[float] = mapped_column(nullable=False)
     distribution_id: Mapped[Optional[int]] = mapped_column(
-        ForeignKey("distribution.id"), nullable=True)
+        ForeignKey("distribution.id"), nullable=True
+    )
     distribution_type: Mapped[Optional[str]] = mapped_column(nullable=True)
     distribution_parameters: Mapped[Optional[Dict[str, Any]]] = mapped_column(
-        JSON, nullable=True)
+        JSON, nullable=True
+    )

     scenario = relationship("Scenario", back_populates="parameters")
     distribution = relationship("Distribution")

View File

@@ -14,7 +14,8 @@ class ProductionOutput(Base):
     unit_symbol = Column(String(16), nullable=True)

     scenario = relationship(
-        "Scenario", back_populates="production_output_items")
+        "Scenario", back_populates="production_output_items"
+    )

     def __repr__(self):
         return (
13
models/role.py Normal file
View File

@@ -0,0 +1,13 @@
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import relationship

from config.database import Base


class Role(Base):
    __tablename__ = "roles"

    id = Column(Integer, primary_key=True, index=True)
    name = Column(String, unique=True, index=True)

    users = relationship("User", back_populates="role")

View File

@@ -20,19 +20,16 @@ class Scenario(Base):
     updated_at = Column(DateTime(timezone=True), onupdate=func.now())

     parameters = relationship("Parameter", back_populates="scenario")
     simulation_results = relationship(
-        SimulationResult, back_populates="scenario")
-    capex_items = relationship(
-        Capex, back_populates="scenario")
-    opex_items = relationship(
-        Opex, back_populates="scenario")
-    consumption_items = relationship(
-        Consumption, back_populates="scenario")
+        SimulationResult, back_populates="scenario"
+    )
+    capex_items = relationship(Capex, back_populates="scenario")
+    opex_items = relationship(Opex, back_populates="scenario")
+    consumption_items = relationship(Consumption, back_populates="scenario")
     production_output_items = relationship(
-        ProductionOutput, back_populates="scenario")
-    equipment_items = relationship(
-        Equipment, back_populates="scenario")
-    maintenance_items = relationship(
-        Maintenance, back_populates="scenario")
+        ProductionOutput, back_populates="scenario"
+    )
+    equipment_items = relationship(Equipment, back_populates="scenario")
+    maintenance_items = relationship(Maintenance, back_populates="scenario")
     # relationships can be defined later

     def __repr__(self):

15
models/theme_setting.py Normal file
View File

@@ -0,0 +1,15 @@
from sqlalchemy import Column, Integer, String

from config.database import Base


class ThemeSetting(Base):
    __tablename__ = "theme_settings"

    id = Column(Integer, primary_key=True, index=True)
    theme_name = Column(String, unique=True, index=True)
    primary_color = Column(String)
    secondary_color = Column(String)
    accent_color = Column(String)
    background_color = Column(String)
    text_color = Column(String)

23
models/user.py Normal file
View File

@@ -0,0 +1,23 @@
from sqlalchemy import Column, Integer, String, ForeignKey
from sqlalchemy.orm import relationship

from config.database import Base
from services.security import get_password_hash, verify_password


class User(Base):
    __tablename__ = "users"

    id = Column(Integer, primary_key=True, index=True)
    username = Column(String, unique=True, index=True)
    email = Column(String, unique=True, index=True)
    hashed_password = Column(String)
    role_id = Column(Integer, ForeignKey("roles.id"))

    role = relationship("Role", back_populates="users")

    def set_password(self, password: str):
        self.hashed_password = get_password_hash(password)

    def check_password(self, password: str) -> bool:
        return verify_password(password, str(self.hashed_password))
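`services/security.py` is not part of this changeset; given the `passlib` dependency added to `requirements.txt`, a plausible minimal implementation of the two imported helpers is:
```python
# services/security.py sketch; assumed implementation based on the passlib
# dependency added in requirements.txt, not the actual file.
from passlib.context import CryptContext

pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")


def get_password_hash(password: str) -> str:
    return pwd_context.hash(password)


def verify_password(plain_password: str, hashed_password: str) -> bool:
    return pwd_context.verify(plain_password, hashed_password)
```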

16
pyproject.toml Normal file
View File

@@ -0,0 +1,16 @@
[tool.black]
line-length = 80
target-version = ['py310']
include = '\.pyi?$'
exclude = '''
/(
.git
| .hg
| .mypy_cache
| .tox
| .venv
| build
| dist
)/
'''

View File

@@ -1,5 +1,7 @@
-playwright
 pytest
 pytest-cov
 pytest-httpx
+playwright
 pytest-playwright
+python-jose
+ruff

View File

@@ -1,4 +1,5 @@
 fastapi
+pydantic>=2.0,<3.0
 uvicorn
 sqlalchemy
 psycopg2-binary
@@ -7,3 +8,5 @@ httpx
 jinja2
 pandas
 numpy
+passlib
+python-jose

View File

@@ -36,7 +36,9 @@ class ConsumptionRead(ConsumptionBase):
     model_config = ConfigDict(from_attributes=True)


-@router.post("/", response_model=ConsumptionRead, status_code=status.HTTP_201_CREATED)
+@router.post(
+    "/", response_model=ConsumptionRead, status_code=status.HTTP_201_CREATED
+)
 def create_consumption(item: ConsumptionCreate, db: Session = Depends(get_db)):
     db_item = Consumption(**item.model_dump())
     db.add(db_item)

View File

@@ -73,7 +73,8 @@ def create_capex(item: CapexCreate, db: Session = Depends(get_db)):
     if not cid:
         code = (payload.pop("currency_code", "USD") or "USD").strip().upper()
         currency_cls = __import__(
-            "models.currency", fromlist=["Currency"]).Currency
+            "models.currency", fromlist=["Currency"]
+        ).Currency
         currency = db.query(currency_cls).filter_by(code=code).one_or_none()
         if currency is None:
             currency = currency_cls(code=code, name=code, symbol=None)
@@ -100,7 +101,8 @@ def create_opex(item: OpexCreate, db: Session = Depends(get_db)):
     if not cid:
         code = (payload.pop("currency_code", "USD") or "USD").strip().upper()
         currency_cls = __import__(
-            "models.currency", fromlist=["Currency"]).Currency
+            "models.currency", fromlist=["Currency"]
+        ).Currency
         currency = db.query(currency_cls).filter_by(code=code).one_or_none()
         if currency is None:
             currency = currency_cls(code=code, name=code, symbol=None)

View File

@@ -1,4 +1,4 @@
-from typing import Dict, List, Optional
+from typing import List, Optional

 from fastapi import APIRouter, Depends, HTTPException, Query, status
 from pydantic import BaseModel, ConfigDict, Field, field_validator
@@ -97,20 +97,20 @@ def _ensure_default_currency(db: Session) -> Currency:

 def _get_currency_or_404(db: Session, code: str) -> Currency:
     normalized = code.strip().upper()
     currency = (
-        db.query(Currency)
-        .filter(Currency.code == normalized)
-        .one_or_none()
+        db.query(Currency).filter(Currency.code == normalized).one_or_none()
     )
     if currency is None:
         raise HTTPException(
-            status_code=status.HTTP_404_NOT_FOUND, detail="Currency not found")
+            status_code=status.HTTP_404_NOT_FOUND, detail="Currency not found"
+        )
     return currency


 @router.get("/", response_model=List[CurrencyRead])
 def list_currencies(
     include_inactive: bool = Query(
-        False, description="Include inactive currencies"),
+        False, description="Include inactive currencies"
+    ),
     db: Session = Depends(get_db),
 ):
     _ensure_default_currency(db)
@@ -121,14 +121,12 @@ def list_currencies(
     return currencies


-@router.post("/", response_model=CurrencyRead, status_code=status.HTTP_201_CREATED)
+@router.post(
+    "/", response_model=CurrencyRead, status_code=status.HTTP_201_CREATED
+)
 def create_currency(payload: CurrencyCreate, db: Session = Depends(get_db)):
     code = payload.code
-    existing = (
-        db.query(Currency)
-        .filter(Currency.code == code)
-        .one_or_none()
-    )
+    existing = db.query(Currency).filter(Currency.code == code).one_or_none()
     if existing is not None:
         raise HTTPException(
             status_code=status.HTTP_409_CONFLICT,
@@ -148,7 +146,9 @@ def create_currency(payload: CurrencyCreate, db: Session = Depends(get_db)):

 @router.put("/{code}", response_model=CurrencyRead)
-def update_currency(code: str, payload: CurrencyUpdate, db: Session = Depends(get_db)):
+def update_currency(
+    code: str, payload: CurrencyUpdate, db: Session = Depends(get_db)
+):
     currency = _get_currency_or_404(db, code)

     if payload.name is not None:
@@ -175,7 +175,9 @@ def update_currency(code: str, payload: CurrencyUpdate, db: Session = Depends(ge

 @router.patch("/{code}/activation", response_model=CurrencyRead)
-def toggle_currency_activation(code: str, body: CurrencyActivation, db: Session = Depends(get_db)):
+def toggle_currency_activation(
+    code: str, body: CurrencyActivation, db: Session = Depends(get_db)
+):
     currency = _get_currency_or_404(db, code)
     code_value = getattr(currency, "code")
     if code_value == DEFAULT_CURRENCY_CODE and body.is_active is False:

View File

@@ -22,7 +22,9 @@ class DistributionRead(DistributionCreate):

 @router.post("/", response_model=DistributionRead)
-async def create_distribution(dist: DistributionCreate, db: Session = Depends(get_db)):
+async def create_distribution(
+    dist: DistributionCreate, db: Session = Depends(get_db)
+):
     db_dist = Distribution(**dist.model_dump())
     db.add(db_dist)
     db.commit()

View File

@@ -23,7 +23,9 @@ class EquipmentRead(EquipmentCreate):

 @router.post("/", response_model=EquipmentRead)
-async def create_equipment(item: EquipmentCreate, db: Session = Depends(get_db)):
+async def create_equipment(
+    item: EquipmentCreate, db: Session = Depends(get_db)
+):
     db_item = Equipment(**item.model_dump())
     db.add(db_item)
     db.commit()

View File

@@ -34,8 +34,9 @@ class MaintenanceRead(MaintenanceBase):

 def _get_maintenance_or_404(db: Session, maintenance_id: int) -> Maintenance:
-    maintenance = db.query(Maintenance).filter(
-        Maintenance.id == maintenance_id).first()
+    maintenance = (
+        db.query(Maintenance).filter(Maintenance.id == maintenance_id).first()
+    )
     if maintenance is None:
         raise HTTPException(
             status_code=status.HTTP_404_NOT_FOUND,
@@ -44,8 +45,12 @@ def _get_maintenance_or_404(db: Session, maintenance_id: int) -> Maintenance:
     return maintenance


-@router.post("/", response_model=MaintenanceRead, status_code=status.HTTP_201_CREATED)
-def create_maintenance(maintenance: MaintenanceCreate, db: Session = Depends(get_db)):
+@router.post(
+    "/", response_model=MaintenanceRead, status_code=status.HTTP_201_CREATED
+)
+def create_maintenance(
+    maintenance: MaintenanceCreate, db: Session = Depends(get_db)
+):
     db_maintenance = Maintenance(**maintenance.model_dump())
     db.add(db_maintenance)
     db.commit()
@@ -54,7 +59,9 @@ def create_maintenance(maintenance: MaintenanceCreate, db: Session = Depends(get

 @router.get("/", response_model=List[MaintenanceRead])
-def list_maintenance(skip: int = 0, limit: int = 100, db: Session = Depends(get_db)):
+def list_maintenance(
+    skip: int = 0, limit: int = 100, db: Session = Depends(get_db)
+):
     return db.query(Maintenance).offset(skip).limit(limit).all()

View File

@@ -30,12 +30,15 @@ class ParameterCreate(BaseModel):
             return None
         if normalized not in {"normal", "uniform", "triangular"}:
             raise ValueError(
-                "distribution_type must be normal, uniform, or triangular")
+                "distribution_type must be normal, uniform, or triangular"
+            )
         return normalized

     @field_validator("distribution_parameters")
     @classmethod
-    def empty_dict_to_none(cls, value: Optional[Dict[str, Any]]) -> Optional[Dict[str, Any]]:
+    def empty_dict_to_none(
+        cls, value: Optional[Dict[str, Any]]
+    ) -> Optional[Dict[str, Any]]:
         if value is None:
             return None
         return value or None
@@ -45,6 +48,7 @@ class ParameterRead(ParameterCreate):
     id: int
     model_config = ConfigDict(from_attributes=True)

+
 @router.post("/", response_model=ParameterRead)
 def create_parameter(param: ParameterCreate, db: Session = Depends(get_db)):
     scen = db.query(Scenario).filter(Scenario.id == param.scenario_id).first()
@@ -55,11 +59,15 @@ def create_parameter(param: ParameterCreate, db: Session = Depends(get_db)):
     distribution_parameters = param.distribution_parameters
     if distribution_id is not None:
-        distribution = db.query(Distribution).filter(
-            Distribution.id == distribution_id).first()
+        distribution = (
+            db.query(Distribution)
+            .filter(Distribution.id == distribution_id)
+            .first()
+        )
         if not distribution:
             raise HTTPException(
-                status_code=404, detail="Distribution not found")
+                status_code=404, detail="Distribution not found"
+            )
         distribution_type = distribution.distribution_type
         distribution_parameters = distribution.parameters or None

View File

@@ -36,8 +36,14 @@ class ProductionOutputRead(ProductionOutputBase):
     model_config = ConfigDict(from_attributes=True)


-@router.post("/", response_model=ProductionOutputRead, status_code=status.HTTP_201_CREATED)
-def create_production(item: ProductionOutputCreate, db: Session = Depends(get_db)):
+@router.post(
+    "/",
+    response_model=ProductionOutputRead,
+    status_code=status.HTTP_201_CREATED,
+)
+def create_production(
+    item: ProductionOutputCreate, db: Session = Depends(get_db)
+):
     db_item = ProductionOutput(**item.model_dump())
     db.add(db_item)
     db.commit()

View File

@@ -24,6 +24,7 @@ class ScenarioRead(ScenarioCreate):
     updated_at: Optional[datetime] = None
     model_config = ConfigDict(from_attributes=True)

+
 @router.post("/", response_model=ScenarioRead)
 def create_scenario(scenario: ScenarioCreate, db: Session = Depends(get_db)):
     db_s = db.query(Scenario).filter(Scenario.name == scenario.name).first()

View File

@@ -11,6 +11,8 @@ from services.settings import (
     list_css_env_override_rows,
     read_css_color_env_overrides,
     update_css_color_settings,
+    get_theme_settings,
+    save_theme_settings,
 )

 router = APIRouter(prefix="/api/settings", tags=["Settings"])
@@ -49,8 +51,7 @@ def read_css_settings(db: Session = Depends(get_db)) -> CSSSettingsResponse:
         values = get_css_color_settings(db)
         env_overrides = read_css_color_env_overrides()
         env_sources = [
-            EnvOverride(**row)
-            for row in list_css_env_override_rows()
+            EnvOverride(**row) for row in list_css_env_override_rows()
         ]
     except ValueError as exc:
         raise HTTPException(
@@ -64,14 +65,17 @@ def read_css_settings(db: Session = Depends(get_db)) -> CSSSettingsResponse:
     )


-@router.put("/css", response_model=CSSSettingsResponse, status_code=status.HTTP_200_OK)
-def update_css_settings(payload: CSSSettingsPayload, db: Session = Depends(get_db)) -> CSSSettingsResponse:
+@router.put(
+    "/css", response_model=CSSSettingsResponse, status_code=status.HTTP_200_OK
+)
+def update_css_settings(
+    payload: CSSSettingsPayload, db: Session = Depends(get_db)
+) -> CSSSettingsResponse:
     try:
         values = update_css_color_settings(db, payload.variables)
         env_overrides = read_css_color_env_overrides()
         env_sources = [
-            EnvOverride(**row)
-            for row in list_css_env_override_rows()
+            EnvOverride(**row) for row in list_css_env_override_rows()
         ]
     except ValueError as exc:
         raise HTTPException(
@@ -83,3 +87,24 @@ def update_css_settings(payload: CSSSettingsPayload, db: Session = Depends(get_d
         env_overrides=env_overrides,
         env_sources=env_sources,
     )
+
+
+class ThemeSettings(BaseModel):
+    theme_name: str
+    primary_color: str
+    secondary_color: str
+    accent_color: str
+    background_color: str
+    text_color: str
+
+
+@router.post("/theme")
+async def update_theme(theme_data: ThemeSettings, db: Session = Depends(get_db)):
+    data_dict = theme_data.model_dump()
+    save_theme_settings(db, data_dict)
+    return {"message": "Theme updated", "theme": data_dict}
+
+
+@router.get("/theme")
+async def get_theme(db: Session = Depends(get_db)):
+    return get_theme_settings(db)
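The `save_theme_settings`/`get_theme_settings` helpers are absent from this diff; a plausible sketch, assuming an upsert keyed on `theme_name` against the new `ThemeSetting` model:
```python
# services/settings.py sketch; assumed helpers, not the actual implementation.
from sqlalchemy.orm import Session

from models.theme_setting import ThemeSetting


def save_theme_settings(db: Session, data: dict) -> ThemeSetting:
    row = (
        db.query(ThemeSetting)
        .filter(ThemeSetting.theme_name == data["theme_name"])
        .one_or_none()
    )
    if row is None:
        row = ThemeSetting(**data)  # first save creates the named theme
        db.add(row)
    else:
        for key, value in data.items():  # later saves update it in place
            setattr(row, key, value)
    db.commit()
    db.refresh(row)
    return row


def get_theme_settings(db: Session) -> list[dict]:
    # Return plain dicts so FastAPI can serialize the response directly.
    return [
        {c.name: getattr(t, c.name) for c in ThemeSetting.__table__.columns}
        for t in db.query(ThemeSetting).all()
    ]
```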

View File

@@ -43,7 +43,9 @@ class SimulationRunResponse(BaseModel):
     summary: Dict[str, float | int]


-def _load_parameters(db: Session, scenario_id: int) -> List[SimulationParameterInput]:
+def _load_parameters(
+    db: Session, scenario_id: int
+) -> List[SimulationParameterInput]:
     db_params = (
         db.query(Parameter)
         .filter(Parameter.scenario_id == scenario_id)
@@ -60,17 +62,19 @@ def _load_parameters(db: Session, scenario_id: int) -> List[SimulationParameterI

 @router.post("/run", response_model=SimulationRunResponse)
-async def simulate(payload: SimulationRunRequest, db: Session = Depends(get_db)):
-    scenario = db.query(Scenario).filter(
-        Scenario.id == payload.scenario_id).first()
+async def simulate(
+    payload: SimulationRunRequest, db: Session = Depends(get_db)
+):
+    scenario = (
+        db.query(Scenario).filter(Scenario.id == payload.scenario_id).first()
+    )
     if scenario is None:
         raise HTTPException(
             status_code=status.HTTP_404_NOT_FOUND,
             detail="Scenario not found",
         )
-    parameters = payload.parameters or _load_parameters(
-        db, payload.scenario_id)
+    parameters = payload.parameters or _load_parameters(db, payload.scenario_id)
     if not parameters:
         raise HTTPException(
             status_code=status.HTTP_400_BAD_REQUEST,

View File

@@ -53,7 +53,9 @@ router = APIRouter()
templates = Jinja2Templates(directory="templates") templates = Jinja2Templates(directory="templates")
def _context(request: Request, extra: Optional[Dict[str, Any]] = None) -> Dict[str, Any]: def _context(
request: Request, extra: Optional[Dict[str, Any]] = None
) -> Dict[str, Any]:
payload: Dict[str, Any] = { payload: Dict[str, Any] = {
"request": request, "request": request,
"current_year": datetime.now(timezone.utc).year, "current_year": datetime.now(timezone.utc).year,
@@ -98,7 +100,9 @@ def _load_scenarios(db: Session) -> Dict[str, Any]:

def _load_parameters(db: Session) -> Dict[str, Any]:
    grouped: defaultdict[int, list[Dict[str, Any]]] = defaultdict(list)
-    for param in db.query(Parameter).order_by(Parameter.scenario_id, Parameter.id):
+    for param in db.query(Parameter).order_by(
+        Parameter.scenario_id, Parameter.id
+    ):
        grouped[param.scenario_id].append(
            {
                "id": param.id,
@@ -113,27 +117,20 @@ def _load_parameters(db: Session) -> Dict[str, Any]:

def _load_costs(db: Session) -> Dict[str, Any]:
    capex_grouped: defaultdict[int, list[Dict[str, Any]]] = defaultdict(list)
-    for capex in (
-        db.query(Capex)
-        .order_by(Capex.scenario_id, Capex.id)
-        .all()
-    ):
+    for capex in db.query(Capex).order_by(Capex.scenario_id, Capex.id).all():
        capex_grouped[int(getattr(capex, "scenario_id"))].append(
            {
                "id": int(getattr(capex, "id")),
                "scenario_id": int(getattr(capex, "scenario_id")),
                "amount": float(getattr(capex, "amount", 0.0)),
                "description": getattr(capex, "description", "") or "",
-                "currency_code": getattr(capex, "currency_code", "USD") or "USD",
+                "currency_code": getattr(capex, "currency_code", "USD")
+                or "USD",
            }
        )
    opex_grouped: defaultdict[int, list[Dict[str, Any]]] = defaultdict(list)
-    for opex in (
-        db.query(Opex)
-        .order_by(Opex.scenario_id, Opex.id)
-        .all()
-    ):
+    for opex in db.query(Opex).order_by(Opex.scenario_id, Opex.id).all():
        opex_grouped[int(getattr(opex, "scenario_id"))].append(
            {
                "id": int(getattr(opex, "id")),
@@ -152,9 +149,15 @@ def _load_costs(db: Session) -> Dict[str, Any]:

def _load_currencies(db: Session) -> Dict[str, Any]:
    items: list[Dict[str, Any]] = []
-    for c in db.query(Currency).filter_by(is_active=True).order_by(Currency.code).all():
+    for c in (
+        db.query(Currency)
+        .filter_by(is_active=True)
+        .order_by(Currency.code)
+        .all()
+    ):
        items.append(
-            {"id": c.code, "name": f"{c.name} ({c.code})", "symbol": c.symbol})
+            {"id": c.code, "name": f"{c.name} ({c.code})", "symbol": c.symbol}
+        )
    if not items:
        items.append({"id": "USD", "name": "US Dollar (USD)", "symbol": "$"})
    return {"currency_options": items}
@@ -261,9 +264,7 @@ def _load_production(db: Session) -> Dict[str, Any]:
def _load_equipment(db: Session) -> Dict[str, Any]:
    grouped: defaultdict[int, list[Dict[str, Any]]] = defaultdict(list)
    for record in (
-        db.query(Equipment)
-        .order_by(Equipment.scenario_id, Equipment.id)
-        .all()
+        db.query(Equipment).order_by(Equipment.scenario_id, Equipment.id).all()
    ):
        record_id = int(getattr(record, "id"))
        scenario_id = int(getattr(record, "scenario_id"))
@@ -291,8 +292,9 @@ def _load_maintenance(db: Session) -> Dict[str, Any]:
        scenario_id = int(getattr(record, "scenario_id"))
        equipment_id = int(getattr(record, "equipment_id"))
        equipment_obj = getattr(record, "equipment", None)
-        equipment_name = getattr(
-            equipment_obj, "name", "") if equipment_obj else ""
+        equipment_name = (
+            getattr(equipment_obj, "name", "") if equipment_obj else ""
+        )
        maintenance_date = getattr(record, "maintenance_date", None)
        cost_value = float(getattr(record, "cost", 0.0))
        description = getattr(record, "description", "") or ""
@@ -303,7 +305,9 @@ def _load_maintenance(db: Session) -> Dict[str, Any]:
                "scenario_id": scenario_id,
                "equipment_id": equipment_id,
                "equipment_name": equipment_name,
-                "maintenance_date": maintenance_date.isoformat() if maintenance_date else "",
+                "maintenance_date": (
+                    maintenance_date.isoformat() if maintenance_date else ""
+                ),
                "cost": cost_value,
                "description": description,
            }
@@ -339,8 +343,11 @@ def _load_simulations(db: Session) -> Dict[str, Any]:
    for item in scenarios:
        scenario_id = int(item["id"])
        scenario_results = results_grouped.get(scenario_id, [])
-        summary = generate_report(
-            scenario_results) if scenario_results else generate_report([])
+        summary = (
+            generate_report(scenario_results)
+            if scenario_results
+            else generate_report([])
+        )
        runs.append(
            {
                "scenario_id": scenario_id,
@@ -395,11 +402,11 @@ def _load_dashboard(db: Session) -> Dict[str, Any]:
    simulation_context = _load_simulations(db)
    simulation_runs = simulation_context["simulation_runs"]
-    runs_by_scenario = {
-        run["scenario_id"]: run for run in simulation_runs
-    }
+    runs_by_scenario = {run["scenario_id"]: run for run in simulation_runs}

-    def sum_amounts(grouped: Dict[int, list[Dict[str, Any]]], field: str = "amount") -> float:
+    def sum_amounts(
+        grouped: Dict[int, list[Dict[str, Any]]], field: str = "amount"
+    ) -> float:
        total = 0.0
        for items in grouped.values():
            for item in items:
@@ -414,14 +421,18 @@ def _load_dashboard(db: Session) -> Dict[str, Any]:
    total_production = sum_amounts(production_by_scenario)
    total_maintenance_cost = sum_amounts(maintenance_by_scenario, field="cost")
-    total_parameters = sum(len(items)
-                           for items in parameters_by_scenario.values())
-    total_equipment = sum(len(items)
-                          for items in equipment_by_scenario.values())
-    total_maintenance_events = sum(len(items)
-                                   for items in maintenance_by_scenario.values())
+    total_parameters = sum(
+        len(items) for items in parameters_by_scenario.values()
+    )
+    total_equipment = sum(
+        len(items) for items in equipment_by_scenario.values()
+    )
+    total_maintenance_events = sum(
+        len(items) for items in maintenance_by_scenario.values()
+    )
    total_simulation_iterations = sum(
-        run["iterations"] for run in simulation_runs)
+        run["iterations"] for run in simulation_runs
+    )

    scenario_rows: list[Dict[str, Any]] = []
    scenario_labels: list[str] = []
@@ -501,20 +512,40 @@ def _load_dashboard(db: Session) -> Dict[str, Any]:
    overall_report = generate_report(all_simulation_results)
    overall_report_metrics = [
-        {"label": "Runs", "value": _format_int(
-            int(overall_report.get("count", 0)))},
-        {"label": "Mean", "value": _format_decimal(
-            float(overall_report.get("mean", 0.0)))},
-        {"label": "Median", "value": _format_decimal(
-            float(overall_report.get("median", 0.0)))},
-        {"label": "Std Dev", "value": _format_decimal(
-            float(overall_report.get("std_dev", 0.0)))},
-        {"label": "95th Percentile", "value": _format_decimal(
-            float(overall_report.get("percentile_95", 0.0)))},
-        {"label": "VaR (95%)", "value": _format_decimal(
-            float(overall_report.get("value_at_risk_95", 0.0)))},
-        {"label": "Expected Shortfall (95%)", "value": _format_decimal(
-            float(overall_report.get("expected_shortfall_95", 0.0)))},
+        {
+            "label": "Runs",
+            "value": _format_int(int(overall_report.get("count", 0))),
+        },
+        {
+            "label": "Mean",
+            "value": _format_decimal(float(overall_report.get("mean", 0.0))),
+        },
+        {
+            "label": "Median",
+            "value": _format_decimal(float(overall_report.get("median", 0.0))),
+        },
+        {
+            "label": "Std Dev",
+            "value": _format_decimal(float(overall_report.get("std_dev", 0.0))),
+        },
+        {
+            "label": "95th Percentile",
+            "value": _format_decimal(
+                float(overall_report.get("percentile_95", 0.0))
+            ),
+        },
+        {
+            "label": "VaR (95%)",
+            "value": _format_decimal(
+                float(overall_report.get("value_at_risk_95", 0.0))
+            ),
+        },
+        {
+            "label": "Expected Shortfall (95%)",
+            "value": _format_decimal(
+                float(overall_report.get("expected_shortfall_95", 0.0))
+            ),
+        },
    ]

    recent_simulations: list[Dict[str, Any]] = [
@@ -522,8 +553,12 @@ def _load_dashboard(db: Session) -> Dict[str, Any]:
            "scenario_name": run["scenario_name"],
            "iterations": run["iterations"],
            "iterations_display": _format_int(run["iterations"]),
-            "mean_display": _format_decimal(float(run["summary"].get("mean", 0.0))),
-            "p95_display": _format_decimal(float(run["summary"].get("percentile_95", 0.0))),
+            "mean_display": _format_decimal(
+                float(run["summary"].get("mean", 0.0))
+            ),
+            "p95_display": _format_decimal(
+                float(run["summary"].get("percentile_95", 0.0))
+            ),
        }
        for run in simulation_runs
        if run["iterations"] > 0
@@ -541,10 +576,20 @@ def _load_dashboard(db: Session) -> Dict[str, Any]:
        maintenance_date = getattr(record, "maintenance_date", None)
        upcoming_maintenance.append(
            {
-                "scenario_name": getattr(getattr(record, "scenario", None), "name", "Unknown"),
-                "equipment_name": getattr(getattr(record, "equipment", None), "name", "Unknown"),
-                "date_display": maintenance_date.strftime("%Y-%m-%d") if maintenance_date else "",
-                "cost_display": _format_currency(float(getattr(record, "cost", 0.0))),
+                "scenario_name": getattr(
+                    getattr(record, "scenario", None), "name", "Unknown"
+                ),
+                "equipment_name": getattr(
+                    getattr(record, "equipment", None), "name", "Unknown"
+                ),
+                "date_display": (
+                    maintenance_date.strftime("%Y-%m-%d")
+                    if maintenance_date
+                    else ""
+                ),
+                "cost_display": _format_currency(
+                    float(getattr(record, "cost", 0.0))
+                ),
                "description": getattr(record, "description", "") or "",
            }
        )
@@ -552,9 +597,9 @@ def _load_dashboard(db: Session) -> Dict[str, Any]:
    cost_chart_has_data = any(value > 0 for value in scenario_capex) or any(
        value > 0 for value in scenario_opex
    )
-    activity_chart_has_data = any(value > 0 for value in activity_production) or any(
-        value > 0 for value in activity_consumption
-    )
+    activity_chart_has_data = any(
+        value > 0 for value in activity_production
+    ) or any(value > 0 for value in activity_consumption)

    scenario_cost_chart: Dict[str, list[Any]] = {
        "labels": scenario_labels,
@@ -573,14 +618,20 @@ def _load_dashboard(db: Session) -> Dict[str, Any]:
        {"label": "CAPEX Total", "value": _format_currency(total_capex)},
        {"label": "OPEX Total", "value": _format_currency(total_opex)},
        {"label": "Equipment Assets", "value": _format_int(total_equipment)},
-        {"label": "Maintenance Events",
-         "value": _format_int(total_maintenance_events)},
+        {
+            "label": "Maintenance Events",
+            "value": _format_int(total_maintenance_events),
+        },
        {"label": "Consumption", "value": _format_decimal(total_consumption)},
        {"label": "Production", "value": _format_decimal(total_production)},
-        {"label": "Simulation Iterations",
-         "value": _format_int(total_simulation_iterations)},
-        {"label": "Maintenance Cost",
-         "value": _format_currency(total_maintenance_cost)},
+        {
+            "label": "Simulation Iterations",
+            "value": _format_int(total_simulation_iterations),
+        },
+        {
+            "label": "Maintenance Cost",
+            "value": _format_currency(total_maintenance_cost),
+        },
    ]

    return {
@@ -704,3 +755,30 @@ async def currencies_view(request: Request, db: Session = Depends(get_db)):
    """Render the currency administration page with full currency context."""
    context = _load_currency_settings(db)
    return _render(request, "currencies.html", context)
+
+
+@router.get("/login", response_class=HTMLResponse)
+async def login_page(request: Request):
+    return _render(request, "login.html")
+
+
+@router.get("/register", response_class=HTMLResponse)
+async def register_page(request: Request):
+    return _render(request, "register.html")
+
+
+@router.get("/profile", response_class=HTMLResponse)
+async def profile_page(request: Request):
+    return _render(request, "profile.html")
+
+
+@router.get("/forgot-password", response_class=HTMLResponse)
+async def forgot_password_page(request: Request):
+    return _render(request, "forgot_password.html")
+
+
+@router.get("/theme-settings", response_class=HTMLResponse)
+async def theme_settings_page(request: Request, db: Session = Depends(get_db)):
+    """Render the theme settings page."""
+    context = _load_css_settings(db)
+    return _render(request, "theme_settings.html", context)

routes/users.py (new file, 107 lines)
View File

@@ -0,0 +1,107 @@
from fastapi import APIRouter, Depends, HTTPException, status
from sqlalchemy.orm import Session

from config.database import get_db
from models.user import User
from services.security import create_access_token, get_current_user
from schemas.user import (
    PasswordReset,
    PasswordResetRequest,
    UserCreate,
    UserInDB,
    UserLogin,
    UserUpdate,
)

router = APIRouter(prefix="/users", tags=["users"])


@router.post("/register", response_model=UserInDB, status_code=status.HTTP_201_CREATED)
async def register_user(user: UserCreate, db: Session = Depends(get_db)):
    db_user = db.query(User).filter(User.username == user.username).first()
    if db_user:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail="Username already registered",
        )
    db_user = db.query(User).filter(User.email == user.email).first()
    if db_user:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail="Email already registered",
        )

    # Get or create default role
    from models.role import Role

    default_role = db.query(Role).filter(Role.name == "user").first()
    if not default_role:
        default_role = Role(name="user")
        db.add(default_role)
        db.commit()
        db.refresh(default_role)

    new_user = User(
        username=user.username, email=user.email, role_id=default_role.id
    )
    new_user.set_password(user.password)
    db.add(new_user)
    db.commit()
    db.refresh(new_user)
    return new_user


@router.post("/login")
async def login_user(user: UserLogin, db: Session = Depends(get_db)):
    db_user = db.query(User).filter(User.username == user.username).first()
    if not db_user or not db_user.check_password(user.password):
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED,
            detail="Incorrect username or password",
        )
    access_token = create_access_token(subject=db_user.username)
    return {"access_token": access_token, "token_type": "bearer"}


@router.get("/me")
async def read_users_me(current_user: User = Depends(get_current_user)):
    return current_user


@router.put("/me", response_model=UserInDB)
async def update_user_me(
    user_update: UserUpdate,
    current_user: User = Depends(get_current_user),
    db: Session = Depends(get_db),
):
    if user_update.username and user_update.username != current_user.username:
        existing_user = db.query(User).filter(
            User.username == user_update.username).first()
        if existing_user:
            raise HTTPException(
                status_code=status.HTTP_400_BAD_REQUEST,
                detail="Username already taken",
            )
        setattr(current_user, "username", user_update.username)
    if user_update.email and user_update.email != current_user.email:
        existing_user = db.query(User).filter(
            User.email == user_update.email).first()
        if existing_user:
            raise HTTPException(
                status_code=status.HTTP_400_BAD_REQUEST,
                detail="Email already registered",
            )
        setattr(current_user, "email", user_update.email)
    if user_update.password:
        current_user.set_password(user_update.password)
    db.add(current_user)
    db.commit()
    db.refresh(current_user)
    return current_user


@router.post("/forgot-password")
async def forgot_password(request: PasswordResetRequest):
    # In a real application, this would send an email with a reset token
    return {"message": "Password reset email sent (not really)"}


@router.post("/reset-password")
async def reset_password(request: PasswordReset, db: Session = Depends(get_db)):
    # In a real application, the token would be verified
    user = db.query(User).filter(
        User.username == request.token
    ).first()  # Use token as username for test
    if not user:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail="Invalid token or user",
        )
    user.set_password(request.new_password)
    db.add(user)
    db.commit()
    return {"message": "Password has been reset successfully"}

schemas/user.py (new file, 41 lines)
View File

@@ -0,0 +1,41 @@
from pydantic import BaseModel, ConfigDict


class UserCreate(BaseModel):
    username: str
    email: str
    password: str


class UserInDB(BaseModel):
    id: int
    username: str
    email: str
    role_id: int

    model_config = ConfigDict(from_attributes=True)


class UserLogin(BaseModel):
    username: str
    password: str


class UserUpdate(BaseModel):
    username: str | None = None
    email: str | None = None
    password: str | None = None


class PasswordResetRequest(BaseModel):
    email: str


class PasswordReset(BaseModel):
    token: str
    new_password: str


class Token(BaseModel):
    access_token: str
    token_type: str

View File

@@ -9,6 +9,7 @@ This script is intentionally cautious: it defaults to dry-run mode and will refu
if database connection settings are missing. It supports creating missing currency rows when `--create-missing`
is provided. Always run against a development/staging database first.
"""
+
from __future__ import annotations

import argparse
import importlib
@@ -36,26 +37,42 @@ def load_database_url() -> str:
    return getattr(db_module, "DATABASE_URL")


-def backfill(db_url: str, dry_run: bool = True, create_missing: bool = False) -> None:
+def backfill(
+    db_url: str, dry_run: bool = True, create_missing: bool = False
+) -> None:
    engine = create_engine(db_url)
    with engine.begin() as conn:
        # Ensure currency table exists
-        res = conn.execute(text("SELECT name FROM sqlite_master WHERE type='table' AND name='currency';")) if db_url.startswith(
-            'sqlite:') else conn.execute(text("SELECT to_regclass('public.currency');"))
+        if db_url.startswith("sqlite:"):
+            conn.execute(
+                text(
+                    "SELECT name FROM sqlite_master WHERE type='table' AND name='currency';"
+                )
+            )
+        else:
+            conn.execute(text("SELECT to_regclass('public.currency');"))
        # Note: we don't strictly depend on the above - we assume migration was already applied

        # Helper: find or create currency by code
        def find_currency_id(code: str):
-            r = conn.execute(text("SELECT id FROM currency WHERE code = :code"), {
-                "code": code}).fetchone()
+            r = conn.execute(
+                text("SELECT id FROM currency WHERE code = :code"),
+                {"code": code},
+            ).fetchone()
            if r:
                return r[0]
            if create_missing:
                # insert and return id
-                conn.execute(text("INSERT INTO currency (code, name, symbol, is_active) VALUES (:c, :n, NULL, TRUE)"), {
-                    "c": code, "n": code})
-                r2 = conn.execute(text("SELECT id FROM currency WHERE code = :code"), {
-                    "code": code}).fetchone()
+                conn.execute(
+                    text(
+                        "INSERT INTO currency (code, name, symbol, is_active) VALUES (:c, :n, NULL, TRUE)"
+                    ),
+                    {"c": code, "n": code},
+                )
+                r2 = conn.execute(
+                    text("SELECT id FROM currency WHERE code = :code"),
+                    {"code": code},
+                ).fetchone()
                if not r2:
                    raise RuntimeError(
                        f"Unable to determine currency ID for '{code}' after insert"
@@ -67,8 +84,15 @@ def backfill(db_url: str, dry_run: bool = True, create_missing: bool = False) ->
        for table in ("capex", "opex"):
            # Check if currency_id column exists
            try:
-                cols = conn.execute(text(f"SELECT 1 FROM information_schema.columns WHERE table_name = '{table}' AND column_name = 'currency_id'")) if not db_url.startswith(
-                    'sqlite:') else [(1,)]
+                cols = (
+                    conn.execute(
+                        text(
+                            f"SELECT 1 FROM information_schema.columns WHERE table_name = '{table}' AND column_name = 'currency_id'"
+                        )
+                    )
+                    if not db_url.startswith("sqlite:")
+                    else [(1,)]
+                )
            except Exception:
                cols = [(1,)]
@@ -77,8 +101,11 @@ def backfill(db_url: str, dry_run: bool = True, create_missing: bool = False) ->
                continue

            # Find rows where currency_id IS NULL but currency_code exists
-            rows = conn.execute(text(
-                f"SELECT id, currency_code FROM {table} WHERE currency_id IS NULL OR currency_id = ''"))
+            rows = conn.execute(
+                text(
+                    f"SELECT id, currency_code FROM {table} WHERE currency_id IS NULL OR currency_id = ''"
+                )
+            )
            changed = 0
            for r in rows:
                rid = r[0]
@@ -86,14 +113,20 @@ def backfill(db_url: str, dry_run: bool = True, create_missing: bool = False) ->
                cid = find_currency_id(code)
                if cid is None:
                    print(
-                        f"Row {table}:{rid} has unknown currency code '{code}' and create_missing=False; skipping")
+                        f"Row {table}:{rid} has unknown currency code '{code}' and create_missing=False; skipping"
+                    )
                    continue
                if dry_run:
                    print(
-                        f"[DRY RUN] Would set {table}.currency_id = {cid} for row id={rid} (code={code})")
+                        f"[DRY RUN] Would set {table}.currency_id = {cid} for row id={rid} (code={code})"
+                    )
                else:
-                    conn.execute(text(f"UPDATE {table} SET currency_id = :cid WHERE id = :rid"), {
-                        "cid": cid, "rid": rid})
+                    conn.execute(
+                        text(
+                            f"UPDATE {table} SET currency_id = :cid WHERE id = :rid"
+                        ),
+                        {"cid": cid, "rid": rid},
+                    )
                    changed += 1
            print(f"{table}: processed, changed={changed} (dry_run={dry_run})")
@@ -101,11 +134,19 @@ def backfill(db_url: str, dry_run: bool = True, create_missing: bool = False) ->
def main() -> None:
    parser = argparse.ArgumentParser(
-        description="Backfill currency_id from currency_code for capex/opex tables")
-    parser.add_argument("--dry-run", action="store_true",
-                        default=True, help="Show actions without writing")
-    parser.add_argument("--create-missing", action="store_true",
-                        help="Create missing currency rows in the currency table")
+        description="Backfill currency_id from currency_code for capex/opex tables"
+    )
+    parser.add_argument(
+        "--dry-run",
+        action="store_true",
+        default=True,
+        help="Show actions without writing",
+    )
+    parser.add_argument(
+        "--create-missing",
+        action="store_true",
+        help="Create missing currency rows in the currency table",
+    )
    args = parser.parse_args()

    db = load_database_url()
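
One practical note on the reformatted script: as written, --dry-run is declared with action="store_true" and default=True, so the flag can never be switched off from the command line and real writes require changing the default. A minimal sketch of driving the backfill programmatically instead (the module path is an assumption):

from scripts.backfill_currency import backfill, load_database_url  # path assumed

# Dry run first: prints the UPDATEs it would issue without writing anything.
backfill(load_database_url(), dry_run=True, create_missing=False)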

View File

@@ -4,25 +4,30 @@ Checks only local file links (relative paths) and reports missing targets.
Run from the repository root using the project's Python environment.
"""
+
import re
from pathlib import Path

ROOT = Path(__file__).resolve().parent.parent
-DOCS = ROOT / 'docs'
+DOCS = ROOT / "docs"

MD_LINK_RE = re.compile(r"\[([^\]]+)\]\(([^)]+)\)")

errors = []
-for md in DOCS.rglob('*.md'):
-    text = md.read_text(encoding='utf-8')
+for md in DOCS.rglob("*.md"):
+    text = md.read_text(encoding="utf-8")
    for m in MD_LINK_RE.finditer(text):
        label, target = m.groups()
        # skip URLs
-        if target.startswith('http://') or target.startswith('https://') or target.startswith('#'):
+        if (
+            target.startswith("http://")
+            or target.startswith("https://")
+            or target.startswith("#")
+        ):
            continue
        # strip anchors
-        target_path = target.split('#')[0]
+        target_path = target.split("#")[0]
        # if link is to a directory index, allow
        candidate = (md.parent / target_path).resolve()
        if candidate.exists():
@@ -30,14 +35,16 @@ for md in DOCS.rglob('*.md'):
        # check common implicit index: target/ -> target/README.md or target/index.md
        candidate_dir = md.parent / target_path
        if candidate_dir.is_dir():
-            if (candidate_dir / 'README.md').exists() or (candidate_dir / 'index.md').exists():
+            if (candidate_dir / "README.md").exists() or (
+                candidate_dir / "index.md"
+            ).exists():
                continue
        errors.append((str(md.relative_to(ROOT)), target, label))

if errors:
-    print('Broken local links found:')
+    print("Broken local links found:")
    for src, tgt, label in errors:
-        print(f'- {src} -> {tgt} ({label})')
+        print(f"- {src} -> {tgt} ({label})")
    exit(2)

-print('No broken local links detected.')
+print("No broken local links detected.")
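
To illustrate the matching logic above: MD_LINK_RE captures the label and target of a Markdown link, and the anchor is stripped before the filesystem check. A small self-contained demo:

import re

MD_LINK_RE = re.compile(r"\[([^\]]+)\]\(([^)]+)\)")
m = MD_LINK_RE.search("See [the guide](setup/guide.md#install) for details.")
label, target = m.groups()          # ("the guide", "setup/guide.md#install")
target_path = target.split("#")[0]  # "setup/guide.md" is what gets resolved
print(label, target_path)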

View File

@@ -2,16 +2,17 @@
This is intentionally small and non-destructive; it touches only files under docs/ and makes safe changes.
"""
+
import re
from pathlib import Path

DOCS = Path(__file__).resolve().parents[1] / "docs"

CODE_LANG_HINTS = {
-    'powershell': ('powershell',),
-    'bash': ('bash', 'sh'),
-    'sql': ('sql',),
-    'python': ('python',),
+    "powershell": ("powershell",),
+    "bash": ("bash", "sh"),
+    "sql": ("sql",),
+    "python": ("python",),
}
@@ -19,48 +20,60 @@ def add_code_fence_language(match):
    fence = match.group(0)
    inner = match.group(1)
    # If language already present, return unchanged
-    if fence.startswith('```') and len(fence.splitlines()[0].strip()) > 3:
+    if fence.startswith("```") and len(fence.splitlines()[0].strip()) > 3:
        return fence
    # Try to infer language from the code content
-    code = inner.strip().splitlines()[0] if inner.strip() else ''
-    lang = ''
-    if code.startswith('$') or code.startswith('PS') or code.lower().startswith('powershell'):
-        lang = 'powershell'
-    elif code.startswith('#') or code.startswith('import') or code.startswith('from'):
-        lang = 'python'
-    elif re.match(r'^(select|insert|update|create)\b', code.strip(), re.I):
-        lang = 'sql'
-    elif code.startswith('git') or code.startswith('./') or code.startswith('sudo'):
-        lang = 'bash'
+    code = inner.strip().splitlines()[0] if inner.strip() else ""
+    lang = ""
+    if (
+        code.startswith("$")
+        or code.startswith("PS")
+        or code.lower().startswith("powershell")
+    ):
+        lang = "powershell"
+    elif (
+        code.startswith("#")
+        or code.startswith("import")
+        or code.startswith("from")
+    ):
+        lang = "python"
+    elif re.match(r"^(select|insert|update|create)\b", code.strip(), re.I):
+        lang = "sql"
+    elif (
+        code.startswith("git")
+        or code.startswith("./")
+        or code.startswith("sudo")
+    ):
+        lang = "bash"
    if lang:
-        return f'```{lang}\n{inner}\n```'
+        return f"```{lang}\n{inner}\n```"
    return fence


def normalize_file(path: Path):
-    text = path.read_text(encoding='utf-8')
+    text = path.read_text(encoding="utf-8")
    orig = text
    # Trim trailing whitespace and ensure single trailing newline
-    text = '\n'.join(line.rstrip() for line in text.splitlines()) + '\n'
+    text = "\n".join(line.rstrip() for line in text.splitlines()) + "\n"
    # Ensure first non-empty line is H1
    lines = text.splitlines()
    for i, ln in enumerate(lines):
        if ln.strip():
-            if not ln.startswith('#'):
-                lines[i] = '# ' + ln
+            if not ln.startswith("#"):
+                lines[i] = "# " + ln
            break
-    text = '\n'.join(lines) + '\n'
+    text = "\n".join(lines) + "\n"
    # Add basic code fence languages where missing (simple heuristic)
-    text = re.sub(r'```\n([\s\S]*?)\n```', add_code_fence_language, text)
+    text = re.sub(r"```\n([\s\S]*?)\n```", add_code_fence_language, text)
    if text != orig:
-        path.write_text(text, encoding='utf-8')
+        path.write_text(text, encoding="utf-8")
        return True
    return False


def main():
    changed = []
-    for p in DOCS.rglob('*.md'):
+    for p in DOCS.rglob("*.md"):
        if p.is_file():
            try:
                if normalize_file(p):
@@ -68,12 +81,12 @@ def main():
            except Exception as e:
                print(f"Failed to format {p}: {e}")
    if changed:
-        print('Formatted files:')
+        print("Formatted files:")
        for c in changed:
-            print(' -', c)
+            print(" -", c)
    else:
-        print('No formatting changes required.')
+        print("No formatting changes required.")


-if __name__ == '__main__':
+if __name__ == "__main__":
    main()
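
A quick check of the fence-language heuristic, reusing the same regex the script applies inside normalize_file; this assumes add_code_fence_language has been imported from the script:

import re

sample = "```\ngit status\n```"
fixed = re.sub(r"```\n([\s\S]*?)\n```", add_code_fence_language, sample)
# The first code line starts with "git", so the bare fence becomes ```bash.
print(fixed)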

View File

@@ -158,4 +158,32 @@ ALTER TABLE capex
ALTER TABLE opex
    DROP COLUMN IF EXISTS currency_code;

+-- Role-based access control tables
+CREATE TABLE IF NOT EXISTS roles (
+    id SERIAL PRIMARY KEY,
+    name VARCHAR(255) UNIQUE NOT NULL
+);
+
+CREATE TABLE IF NOT EXISTS users (
+    id SERIAL PRIMARY KEY,
+    username VARCHAR(255) UNIQUE NOT NULL,
+    email VARCHAR(255) UNIQUE NOT NULL,
+    hashed_password VARCHAR(255) NOT NULL,
+    role_id INTEGER NOT NULL REFERENCES roles (id) ON DELETE RESTRICT
+);
+
+CREATE INDEX IF NOT EXISTS ix_users_username ON users (username);
+CREATE INDEX IF NOT EXISTS ix_users_email ON users (email);
+
+-- Theme settings configuration table
+CREATE TABLE IF NOT EXISTS theme_settings (
+    id SERIAL PRIMARY KEY,
+    theme_name VARCHAR(255) UNIQUE NOT NULL,
+    primary_color VARCHAR(7) NOT NULL,
+    secondary_color VARCHAR(7) NOT NULL,
+    accent_color VARCHAR(7) NOT NULL,
+    background_color VARCHAR(7) NOT NULL,
+    text_color VARCHAR(7) NOT NULL
+);
+
COMMIT;

View File

@@ -1,25 +0,0 @@
-- Migration: Create application_setting table for configurable application options
-- Date: 2025-10-25
-- Description: Introduces persistent storage for application-level settings such as theme colors.
BEGIN;
CREATE TABLE IF NOT EXISTS application_setting (
id SERIAL PRIMARY KEY,
key VARCHAR(128) NOT NULL UNIQUE,
value TEXT NOT NULL,
value_type VARCHAR(32) NOT NULL DEFAULT 'string',
category VARCHAR(32) NOT NULL DEFAULT 'general',
description TEXT,
is_editable BOOLEAN NOT NULL DEFAULT TRUE,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
CREATE UNIQUE INDEX IF NOT EXISTS ux_application_setting_key
ON application_setting (key);
CREATE INDEX IF NOT EXISTS ix_application_setting_category
ON application_setting (category);
COMMIT;

View File

@@ -16,8 +16,7 @@ from __future__ import annotations
import argparse
import logging
-import os
-from typing import Iterable, Optional
+from typing import Optional

import psycopg2
from psycopg2 import errors
@@ -47,22 +46,82 @@ MEASUREMENT_UNIT_SEEDS = (
    ("kilowatt_hours", "Kilowatt Hours", "kWh", "energy", True),
)

+THEME_SETTING_SEEDS = (
+    ("--color-background", "#f4f5f7", "color",
+     "theme", "CSS variable --color-background", True),
+    ("--color-surface", "#ffffff", "color",
+     "theme", "CSS variable --color-surface", True),
+    ("--color-text-primary", "#2a1f33", "color",
+     "theme", "CSS variable --color-text-primary", True),
+    ("--color-text-secondary", "#624769", "color",
+     "theme", "CSS variable --color-text-secondary", True),
+    ("--color-text-muted", "#64748b", "color",
+     "theme", "CSS variable --color-text-muted", True),
+    ("--color-text-subtle", "#94a3b8", "color",
+     "theme", "CSS variable --color-text-subtle", True),
+    ("--color-text-invert", "#ffffff", "color",
+     "theme", "CSS variable --color-text-invert", True),
+    ("--color-text-dark", "#0f172a", "color",
+     "theme", "CSS variable --color-text-dark", True),
+    ("--color-text-strong", "#111827", "color",
+     "theme", "CSS variable --color-text-strong", True),
+    ("--color-primary", "#5f320d", "color",
+     "theme", "CSS variable --color-primary", True),
+    ("--color-primary-strong", "#7e4c13", "color",
+     "theme", "CSS variable --color-primary-strong", True),
+    ("--color-primary-stronger", "#837c15", "color",
+     "theme", "CSS variable --color-primary-stronger", True),
+    ("--color-accent", "#bff838", "color",
+     "theme", "CSS variable --color-accent", True),
+    ("--color-border", "#e2e8f0", "color",
+     "theme", "CSS variable --color-border", True),
+    ("--color-border-strong", "#cbd5e1", "color",
+     "theme", "CSS variable --color-border-strong", True),
+    ("--color-highlight", "#eef2ff", "color",
+     "theme", "CSS variable --color-highlight", True),
+    ("--color-panel-shadow", "rgba(15, 23, 42, 0.08)", "color",
+     "theme", "CSS variable --color-panel-shadow", True),
+    ("--color-panel-shadow-deep", "rgba(15, 23, 42, 0.12)", "color",
+     "theme", "CSS variable --color-panel-shadow-deep", True),
+    ("--color-surface-alt", "#f8fafc", "color",
+     "theme", "CSS variable --color-surface-alt", True),
+    ("--color-success", "#047857", "color",
+     "theme", "CSS variable --color-success", True),
+    ("--color-error", "#b91c1c", "color",
+     "theme", "CSS variable --color-error", True),
+)
+
def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Seed baseline CalMiner data")
-    parser.add_argument("--currencies", action="store_true", help="Seed currency table")
-    parser.add_argument("--units", action="store_true", help="Seed unit table")
-    parser.add_argument("--defaults", action="store_true", help="Seed default records")
-    parser.add_argument("--dry-run", action="store_true", help="Print actions without executing")
-    parser.add_argument(
-        "--verbose", "-v", action="count", default=0, help="Increase logging verbosity"
-    )
+    parser.add_argument(
+        "--currencies", action="store_true", help="Seed currency table"
+    )
+    parser.add_argument("--units", action="store_true", help="Seed unit table")
+    parser.add_argument(
+        "--theme", action="store_true", help="Seed theme settings"
+    )
+    parser.add_argument(
+        "--defaults", action="store_true", help="Seed default records"
+    )
+    parser.add_argument(
+        "--dry-run", action="store_true", help="Print actions without executing"
+    )
+    parser.add_argument(
+        "--verbose",
+        "-v",
+        action="count",
+        default=0,
+        help="Increase logging verbosity",
+    )
    return parser.parse_args()


def _configure_logging(args: argparse.Namespace) -> None:
    level = logging.WARNING - (10 * min(args.verbose, 2))
-    logging.basicConfig(level=max(level, logging.INFO), format="%(levelname)s %(message)s")
+    logging.basicConfig(
+        level=max(level, logging.INFO), format="%(levelname)s %(message)s"
+    )


def main() -> None:
@@ -75,22 +134,36 @@ def run_with_namespace(
    *,
    config: Optional[DatabaseConfig] = None,
) -> None:
-    if not hasattr(args, "verbose"):
-        args.verbose = 0
-    if not hasattr(args, "dry_run"):
-        args.dry_run = False
    _configure_logging(args)

-    if not any((args.currencies, args.units, args.defaults)):
+    currencies = bool(getattr(args, "currencies", False))
+    units = bool(getattr(args, "units", False))
+    theme = bool(getattr(args, "theme", False))
+    defaults = bool(getattr(args, "defaults", False))
+    dry_run = bool(getattr(args, "dry_run", False))
+
+    if not any((currencies, units, theme, defaults)):
        logger.info("No seeding options provided; exiting")
        return

    config = config or DatabaseConfig.from_env()
    with psycopg2.connect(config.application_dsn()) as conn:
        conn.autocommit = True
        with conn.cursor() as cursor:
-            if args.currencies:
-                _seed_currencies(cursor, dry_run=args.dry_run)
-            if args.units:
-                _seed_units(cursor, dry_run=args.dry_run)
-            if args.defaults:
-                _seed_defaults(cursor, dry_run=args.dry_run)
+            if currencies:
+                _seed_currencies(cursor, dry_run=dry_run)
+            if units:
+                _seed_units(cursor, dry_run=dry_run)
+            if theme:
+                _seed_theme(cursor, dry_run=dry_run)
+            if defaults:
+                _seed_defaults(cursor, dry_run=dry_run)


def _seed_currencies(cursor, *, dry_run: bool) -> None:
@@ -152,11 +225,44 @@ def _seed_units(cursor, *, dry_run: bool) -> None:
    logger.info("Measurement unit seed complete")


-def _seed_defaults(cursor, *, dry_run: bool) -> None:
-    logger.info("Seeding default records - not yet implemented")
+def _seed_theme(cursor, *, dry_run: bool) -> None:
+    logger.info("Seeding theme settings (%d rows)", len(THEME_SETTING_SEEDS))
    if dry_run:
+        for key, value, _, _, _, _ in THEME_SETTING_SEEDS:
+            logger.info(
+                "Dry run: would upsert theme setting %s = %s", key, value)
        return
+
+    try:
+        execute_values(
+            cursor,
+            """
+            INSERT INTO application_setting (key, value, value_type, category, description, is_editable)
+            VALUES %s
+            ON CONFLICT (key) DO UPDATE
+            SET value = EXCLUDED.value,
+                value_type = EXCLUDED.value_type,
+                category = EXCLUDED.category,
+                description = EXCLUDED.description,
+                is_editable = EXCLUDED.is_editable
+            """,
+            THEME_SETTING_SEEDS,
+        )
+    except errors.UndefinedTable:
+        logger.warning(
+            "application_setting table does not exist; skipping theme seeding."
+        )
+        cursor.connection.rollback()
+        return
+    logger.info("Theme settings seed complete")
+
+
+def _seed_defaults(cursor, *, dry_run: bool) -> None:
+    logger.info("Seeding default records")
+    _seed_theme(cursor, dry_run=dry_run)
+    logger.info("Default records seed complete")


if __name__ == "__main__":
    main()
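
Because run_with_namespace reads its flags with getattr, it can be driven directly with a plain argparse.Namespace; this mirrors how setup_database.py invokes it later in this changeset, with connection settings read from the environment via DatabaseConfig.from_env():

import argparse

seed_args = argparse.Namespace(
    currencies=False,
    units=False,
    theme=True,    # upserts THEME_SETTING_SEEDS into application_setting
    defaults=False,
    dry_run=True,  # log the would-be upserts without touching the database
    verbose=1,
)
run_with_namespace(seed_args)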

View File

@@ -22,6 +22,7 @@ connection string; this script will still honor the granular inputs above.
"""
from __future__ import annotations

+from config.database import Base
import argparse
import importlib
import logging
@@ -39,10 +40,10 @@ from psycopg2 import extensions
from psycopg2.extensions import connection as PGConnection, parse_dsn
from dotenv import load_dotenv
from sqlalchemy import create_engine, inspect

ROOT_DIR = Path(__file__).resolve().parents[1]
if str(ROOT_DIR) not in sys.path:
    sys.path.insert(0, str(ROOT_DIR))

-from config.database import Base

logger = logging.getLogger(__name__)
@@ -208,12 +209,17 @@ class DatabaseConfig:
class DatabaseSetup:
    """Encapsulates the full setup workflow."""

-    def __init__(self, config: DatabaseConfig, *, dry_run: bool = False) -> None:
+    def __init__(
+        self, config: DatabaseConfig, *, dry_run: bool = False
+    ) -> None:
        self.config = config
        self.dry_run = dry_run
        self._models_loaded = False
        self._rollback_actions: list[tuple[str, Callable[[], None]]] = []

-    def _register_rollback(self, label: str, action: Callable[[], None]) -> None:
+    def _register_rollback(
+        self, label: str, action: Callable[[], None]
+    ) -> None:
        if self.dry_run:
            return
        self._rollback_actions.append((label, action))
@@ -237,7 +243,6 @@ class DatabaseSetup:
    def clear_rollbacks(self) -> None:
        self._rollback_actions.clear()

-
    def _describe_connection(self, user: str, database: str) -> str:
        return f"{user}@{self.config.host}:{self.config.port}/{database}"
@@ -245,7 +250,7 @@ class DatabaseSetup:
        descriptor = self._describe_connection(
            self.config.admin_user, self.config.admin_database
        )
-        logger.info("Validating admin connection (%s)", descriptor)
+        logger.info("[CONNECT] Validating admin connection (%s)", descriptor)
        try:
            with self._admin_connection(self.config.admin_database) as conn:
                with conn.cursor() as cursor:
@@ -256,13 +261,14 @@ class DatabaseSetup:
                "Check DATABASE_ADMIN_URL or DATABASE_SUPERUSER settings."
                f" Target: {descriptor}"
            ) from exc
-        logger.info("Admin connection verified (%s)", descriptor)
+        logger.info("[CONNECT] Admin connection verified (%s)", descriptor)

    def validate_application_connection(self) -> None:
        descriptor = self._describe_connection(
            self.config.user, self.config.database
        )
-        logger.info("Validating application connection (%s)", descriptor)
+        logger.info(
+            "[CONNECT] Validating application connection (%s)", descriptor)
        try:
            with self._application_connection() as conn:
                with conn.cursor() as cursor:
@@ -273,7 +279,8 @@ class DatabaseSetup:
                "Ensure the role exists and credentials are correct. "
                f"Target: {descriptor}"
            ) from exc
-        logger.info("Application connection verified (%s)", descriptor)
+        logger.info(
+            "[CONNECT] Application connection verified (%s)", descriptor)

    def ensure_database(self) -> None:
        """Create the target database when it does not already exist."""
@@ -336,7 +343,8 @@ class DatabaseSetup:
                    rollback_label = f"drop database {self.config.database}"
                    self._register_rollback(
                        rollback_label,
-                        lambda db=self.config.database: self._drop_database(db),
+                        lambda db=self.config.database: self._drop_database(
+                            db),
                    )
                logger.info("Created database '%s'", self.config.database)
            finally:
@@ -384,9 +392,9 @@ class DatabaseSetup:
            try:
                if self.config.password:
                    cursor.execute(
-                        sql.SQL("CREATE ROLE {} WITH LOGIN PASSWORD %s").format(
-                            sql.Identifier(self.config.user)
-                        ),
+                        sql.SQL(
+                            "CREATE ROLE {} WITH LOGIN PASSWORD %s"
+                        ).format(sql.Identifier(self.config.user)),
                        (self.config.password,),
                    )
                else:
@@ -405,7 +413,8 @@ class DatabaseSetup:
                    rollback_label = f"drop role {self.config.user}"
                    self._register_rollback(
                        rollback_label,
-                        lambda role=self.config.user: self._drop_role(role),
+                        lambda role=self.config.user: self._drop_role(
+                            role),
                    )
                else:
                    logger.info("Role '%s' already present", self.config.user)
@@ -579,32 +588,28 @@ class DatabaseSetup:
        except RuntimeError:
            raise

+    def _connect(self, dsn: str, descriptor: str) -> PGConnection:
+        try:
+            return psycopg2.connect(dsn)
+        except psycopg2.Error as exc:
+            raise RuntimeError(
+                f"Unable to establish connection. Target: {descriptor}"
+            ) from exc
+
    def _admin_connection(self, database: Optional[str] = None) -> PGConnection:
        target_db = database or self.config.admin_database
        dsn = self.config.admin_dsn(database)
        descriptor = self._describe_connection(
            self.config.admin_user, target_db
        )
-        try:
-            return psycopg2.connect(dsn)
-        except psycopg2.Error as exc:
-            raise RuntimeError(
-                "Unable to establish admin connection. "
-                f"Target: {descriptor}"
-            ) from exc
+        return self._connect(dsn, descriptor)

    def _application_connection(self) -> PGConnection:
        dsn = self.config.application_dsn()
        descriptor = self._describe_connection(
            self.config.user, self.config.database
        )
-        try:
-            return psycopg2.connect(dsn)
-        except psycopg2.Error as exc:
-            raise RuntimeError(
-                "Unable to establish application connection. "
-                f"Target: {descriptor}"
-            ) from exc
+        return self._connect(dsn, descriptor)

    def initialize_schema(self) -> None:
        """Create database objects from SQLAlchemy metadata if missing."""
@@ -645,7 +650,9 @@ class DatabaseSetup:
            importlib.import_module(f"{package.__name__}.{module_info.name}")
        self._models_loaded = True

-    def run_migrations(self, migrations_dir: Optional[Path | str] = None) -> None:
+    def run_migrations(
+        self, migrations_dir: Optional[Path | str] = None
+    ) -> None:
        """Execute pending SQL migrations in chronological order."""

        directory = (
@@ -673,7 +680,8 @@ class DatabaseSetup:
            conn.autocommit = True
            with conn.cursor() as cursor:
                table_exists = self._migrations_table_exists(
-                    cursor, schema_name)
+                    cursor, schema_name
+                )
                if not table_exists:
                    if self.dry_run:
                        logger.info(
@@ -692,73 +700,15 @@ class DatabaseSetup:
                        applied = set()
                else:
                    applied = self._fetch_applied_migrations(
-                        cursor, schema_name)
+                        cursor, schema_name
+                    )

-                if (
-                    baseline_path.exists()
-                    and baseline_name not in applied
-                ):
-                    if self.dry_run:
-                        logger.info(
-                            "Dry run: baseline migration '%s' pending; would apply and mark legacy files",
-                            baseline_name,
-                        )
-                    else:
-                        logger.info(
-                            "Baseline migration '%s' pending; applying and marking older migrations",
-                            baseline_name,
-                        )
-                        try:
-                            baseline_applied = self._apply_migration_file(
-                                cursor, schema_name, baseline_path
-                            )
-                        except Exception:
-                            logger.error(
-                                "Failed while applying baseline migration '%s'."
-                                " Review the migration contents and rerun with --dry-run for diagnostics.",
-                                baseline_name,
-                                exc_info=True,
-                            )
-                            raise
-                        applied.add(baseline_applied)
-                        legacy_files = [
-                            path
-                            for path in migration_files
-                            if path.name != baseline_name
-                        ]
-                        for legacy in legacy_files:
-                            if legacy.name not in applied:
-                                try:
-                                    cursor.execute(
-                                        sql.SQL(
-                                            "INSERT INTO {} (filename, applied_at) VALUES (%s, NOW())"
-                                        ).format(
-                                            sql.Identifier(
-                                                schema_name,
-                                                MIGRATIONS_TABLE,
-                                            )
-                                        ),
-                                        (legacy.name,),
-                                    )
-                                except Exception:
-                                    logger.error(
-                                        "Unable to record legacy migration '%s' after baseline application."
-                                        " Check schema_migrations table in schema '%s' for partial state.",
-                                        legacy.name,
-                                        schema_name,
-                                        exc_info=True,
-                                    )
-                                    raise
-                                applied.add(legacy.name)
-                                logger.info(
-                                    "Marked legacy migration '%s' as applied via baseline",
-                                    legacy.name,
-                                )
+                self._handle_baseline_migration(
+                    cursor, schema_name, baseline_path, baseline_name, migration_files, applied
+                )

                pending = [
-                    path
-                    for path in migration_files
-                    if path.name not in applied
+                    path for path in migration_files if path.name not in applied
                ]

                if not pending:
@@ -779,6 +729,85 @@ class DatabaseSetup:
        logger.info("Applied %d migrations", len(pending))

+    def _handle_baseline_migration(
+        self,
+        cursor: extensions.cursor,
+        schema_name: str,
+        baseline_path: Path,
+        baseline_name: str,
+        migration_files: list[Path],
+        applied: set[str],
+    ) -> None:
+        if baseline_path.exists() and baseline_name not in applied:
+            if self.dry_run:
+                logger.info(
+                    "Dry run: baseline migration '%s' pending; would apply and mark legacy files",
+                    baseline_name,
+                )
+            else:
+                logger.info(
+                    "[MIGRATE] Baseline migration '%s' pending; applying and marking older migrations",
+                    baseline_name,
+                )
+                try:
+                    baseline_applied = self._apply_migration_file(
+                        cursor, schema_name, baseline_path
+                    )
+                except Exception:
+                    logger.error(
+                        "Failed while applying baseline migration '%s'."
+                        " Review the migration contents and rerun with --dry-run for diagnostics.",
+                        baseline_name,
+                        exc_info=True,
+                    )
+                    raise
+                applied.add(baseline_applied)
+                self._mark_legacy_migrations_as_applied(
+                    cursor, schema_name, migration_files, baseline_name, applied
+                )
+
+    def _mark_legacy_migrations_as_applied(
+        self,
+        cursor: extensions.cursor,
+        schema_name: str,
+        migration_files: list[Path],
+        baseline_name: str,
+        applied: set[str],
+    ) -> None:
+        legacy_files = [
+            path
+            for path in migration_files
+            if path.name != baseline_name
+        ]
+        for legacy in legacy_files:
+            if legacy.name not in applied:
+                try:
+                    cursor.execute(
+                        sql.SQL(
+                            "INSERT INTO {} (filename, applied_at) VALUES (%s, NOW())"
+                        ).format(
+                            sql.Identifier(
+                                schema_name,
+                                MIGRATIONS_TABLE,
+                            )
+                        ),
+                        (legacy.name,),
+                    )
+                except Exception:
+                    logger.error(
+                        "Unable to record legacy migration '%s' after baseline application."
+                        " Check schema_migrations table in schema '%s' for partial state.",
+                        legacy.name,
+                        schema_name,
+                        exc_info=True,
+                    )
+                    raise
+                applied.add(legacy.name)
+                logger.info(
+                    "Marked legacy migration '%s' as applied via baseline",
+                    legacy.name,
+                )
+
    def _apply_migration_file(
        self,
        cursor,
@@ -792,9 +821,7 @@ class DatabaseSetup:
            cursor.execute(
                sql.SQL(
                    "INSERT INTO {} (filename, applied_at) VALUES (%s, NOW())"
-                ).format(
-                    sql.Identifier(schema_name, MIGRATIONS_TABLE)
-                ),
+                ).format(sql.Identifier(schema_name, MIGRATIONS_TABLE)),
                (path.name,),
            )
        return path.name
@@ -820,9 +847,7 @@ class DatabaseSetup:
                    "filename TEXT PRIMARY KEY,"
                    "applied_at TIMESTAMPTZ NOT NULL DEFAULT NOW()"
                    ")"
-                ).format(
-                    sql.Identifier(schema_name, MIGRATIONS_TABLE)
-                )
+                ).format(sql.Identifier(schema_name, MIGRATIONS_TABLE))
            )

    def _fetch_applied_migrations(self, cursor, schema_name: str) -> set[str]:
@@ -841,14 +866,23 @@ class DatabaseSetup:
        seed_args = argparse.Namespace(
            currencies=True,
            units=True,
+            theme=True,
            defaults=False,
            dry_run=dry_run,
            verbose=0,
        )
-        seed_data.run_with_namespace(seed_args, config=self.config)
+        try:
+            seed_data.run_with_namespace(seed_args, config=self.config)
+        except Exception:
+            logger.error(
+                "[SEED] Failed during baseline data seeding. "
+                "Review seed_data.py and rerun with --dry-run for diagnostics.",
+                exc_info=True,
+            )
+            raise

        if dry_run:
-            logger.info("Dry run: skipped seed verification")
+            logger.info("[SEED] Dry run: skipped seed verification")
            return

        expected_currencies = {
@@ -894,7 +928,7 @@ class DatabaseSetup:
            raise RuntimeError(message)

        logger.info(
-            "Verified %d seeded currencies present",
+            "[VERIFY] Verified %d seeded currencies present",
            len(found_codes),
        )
@@ -916,7 +950,8 @@ class DatabaseSetup:
            logger.error(message)
            raise RuntimeError(message)
        else:
-            logger.info("Verified default currency 'USD' active")
+            logger.info(
+                "[VERIFY] Verified default currency 'USD' active")

        if expected_unit_codes:
            try:
@@ -974,7 +1009,7 @@ class DatabaseSetup:
                    (database,),
                )
                cursor.execute(
-                    sql.SQL("DROP DATABASE IF EXISTS {}" ).format(
+                    sql.SQL("DROP DATABASE IF EXISTS {}").format(
                        sql.Identifier(database)
                    )
                )
@@ -985,7 +1020,7 @@ class DatabaseSetup:
            conn.autocommit = True
            with conn.cursor() as cursor:
                cursor.execute(
-                    sql.SQL("DROP ROLE IF EXISTS {}" ).format(
+                    sql.SQL("DROP ROLE IF EXISTS {}").format(
                        sql.Identifier(role)
                    )
                )
@@ -1000,27 +1035,35 @@ class DatabaseSetup:
            conn.autocommit = True
            with conn.cursor() as cursor:
                cursor.execute(
-                    sql.SQL("REVOKE ALL PRIVILEGES ON ALL TABLES IN SCHEMA {} FROM {}" ).format(
+                    sql.SQL(
+                        "REVOKE ALL PRIVILEGES ON ALL TABLES IN SCHEMA {} FROM {}"
+                    ).format(
                        sql.Identifier(schema_name),
-                        sql.Identifier(self.config.user)
+                        sql.Identifier(self.config.user),
                    )
                )
                cursor.execute(
-                    sql.SQL("REVOKE ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA {} FROM {}" ).format(
+                    sql.SQL(
+                        "REVOKE ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA {} FROM {}"
+                    ).format(
                        sql.Identifier(schema_name),
-                        sql.Identifier(self.config.user)
+                        sql.Identifier(self.config.user),
                    )
                )
                cursor.execute(
-                    sql.SQL("ALTER DEFAULT PRIVILEGES IN SCHEMA {} REVOKE SELECT, INSERT, UPDATE, DELETE ON TABLES FROM {}" ).format(
+                    sql.SQL(
+                        "ALTER DEFAULT PRIVILEGES IN SCHEMA {} REVOKE SELECT, INSERT, UPDATE, DELETE ON TABLES FROM {}"
+                    ).format(
                        sql.Identifier(schema_name),
-                        sql.Identifier(self.config.user)
+                        sql.Identifier(self.config.user),
                    )
                )
                cursor.execute(
-                    sql.SQL("ALTER DEFAULT PRIVILEGES IN SCHEMA {} REVOKE USAGE, SELECT ON SEQUENCES FROM {}" ).format(
+                    sql.SQL(
+                        "ALTER DEFAULT PRIVILEGES IN SCHEMA {} REVOKE USAGE, SELECT ON SEQUENCES FROM {}"
+                    ).format(
                        sql.Identifier(schema_name),
-                        sql.Identifier(self.config.user)
+                        sql.Identifier(self.config.user),
                    )
                )
@@ -1064,19 +1107,18 @@ def parse_args() -> argparse.Namespace:
    )
    parser.add_argument("--db-driver", help="Override DATABASE_DRIVER")
    parser.add_argument("--db-host", help="Override DATABASE_HOST")
-    parser.add_argument("--db-port", type=int,
-                        help="Override DATABASE_PORT")
+    parser.add_argument("--db-port", type=int, help="Override DATABASE_PORT")
    parser.add_argument("--db-name", help="Override DATABASE_NAME")
    parser.add_argument("--db-user", help="Override DATABASE_USER")
-    parser.add_argument(
-        "--db-password", help="Override DATABASE_PASSWORD")
+    parser.add_argument("--db-password", help="Override DATABASE_PASSWORD")
    parser.add_argument("--db-schema", help="Override DATABASE_SCHEMA")
    parser.add_argument(
        "--admin-url",
        help="Override DATABASE_ADMIN_URL for administrative operations",
    )
    parser.add_argument(
-        "--admin-user", help="Override DATABASE_SUPERUSER for admin ops")
+        "--admin-user", help="Override DATABASE_SUPERUSER for admin ops"
+    )
    parser.add_argument(
        "--admin-password",
        help="Override DATABASE_SUPERUSER_PASSWORD for admin ops",
@@ -1091,7 +1133,11 @@ def parse_args() -> argparse.Namespace:
         help="Log actions without applying changes.",
     )
     parser.add_argument(
-        "--verbose", "-v", action="count", default=0, help="Increase logging verbosity"
+        "--verbose",
+        "-v",
+        action="count",
+        default=0,
+        help="Increase logging verbosity",
     )
     return parser.parse_args()
@@ -1099,8 +1145,9 @@ def parse_args() -> argparse.Namespace:
 def main() -> None:
     args = parse_args()
     level = logging.WARNING - (10 * min(args.verbose, 2))
-    logging.basicConfig(level=max(level, logging.INFO),
-                        format="%(levelname)s %(message)s")
+    logging.basicConfig(
+        level=max(level, logging.INFO), format="%(levelname)s %(message)s"
+    )

     override_args: dict[str, Optional[str]] = {
         "DATABASE_DRIVER": args.db_driver,
@@ -1120,7 +1167,9 @@ def main() -> None:
     config = DatabaseConfig.from_env(overrides=override_args)
     setup = DatabaseSetup(config, dry_run=args.dry_run)
-    admin_tasks_requested = args.ensure_database or args.ensure_role or args.ensure_schema
+    admin_tasks_requested = (
+        args.ensure_database or args.ensure_role or args.ensure_schema
+    )

     if admin_tasks_requested:
         setup.validate_admin_connection()
@@ -1145,9 +1194,7 @@ def main() -> None:
     auto_run_migrations_reason: Optional[str] = None
     if args.seed_data and not should_run_migrations:
         should_run_migrations = True
-        auto_run_migrations_reason = (
-            "Seed data requested without explicit --run-migrations; applying migrations first."
-        )
+        auto_run_migrations_reason = "Seed data requested without explicit --run-migrations; applying migrations first."

     try:
         if args.ensure_database:
@@ -1167,9 +1214,7 @@ def main() -> None:
         if auto_run_migrations_reason:
             logger.info(auto_run_migrations_reason)
         migrations_path = (
-            Path(args.migrations_dir)
-            if args.migrations_dir
-            else None
+            Path(args.migrations_dir) if args.migrations_dir else None
         )
         setup.run_migrations(migrations_path)
     if args.seed_data:

View File

@@ -27,7 +27,9 @@ def _percentile(values: List[float], percentile: float) -> float:
     return sorted_values[lower] * (1 - weight) + sorted_values[upper] * weight


-def generate_report(simulation_results: List[Dict[str, float]]) -> Dict[str, Union[float, int]]:
+def generate_report(
+    simulation_results: List[Dict[str, float]],
+) -> Dict[str, Union[float, int]]:
     """Aggregate basic statistics for simulation outputs."""

     values = _extract_results(simulation_results)
@@ -63,7 +65,7 @@ def generate_report(simulation_results: List[Dict[str, float]]) -> Dict[str, Uni
     std_dev = pstdev(values) if len(values) > 1 else 0.0
     summary["std_dev"] = std_dev
-    summary["variance"] = std_dev ** 2
+    summary["variance"] = std_dev**2

     var_95 = summary["percentile_5"]
     summary["value_at_risk_95"] = var_95

services/security.py (new file, 59 lines)
View File

@@ -0,0 +1,59 @@
from datetime import datetime, timedelta
from typing import Any, Union
from fastapi import HTTPException, status, Depends
from fastapi.security import OAuth2PasswordBearer
from jose import jwt, JWTError
from passlib.context import CryptContext
from sqlalchemy.orm import Session
from config.database import get_db
ACCESS_TOKEN_EXPIRE_MINUTES = 30
SECRET_KEY = "your-secret-key" # Change this in production
ALGORITHM = "HS256"
pwd_context = CryptContext(schemes=["pbkdf2_sha256"], deprecated="auto")
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="users/login")
def create_access_token(
subject: Union[str, Any], expires_delta: Union[timedelta, None] = None
) -> str:
if expires_delta:
expire = datetime.utcnow() + expires_delta
else:
expire = datetime.utcnow() + timedelta(minutes=ACCESS_TOKEN_EXPIRE_MINUTES)
to_encode = {"exp": expire, "sub": str(subject)}
encoded_jwt = jwt.encode(to_encode, SECRET_KEY, algorithm=ALGORITHM)
return encoded_jwt
def verify_password(plain_password: str, hashed_password: str) -> bool:
return pwd_context.verify(plain_password, hashed_password)
def get_password_hash(password: str) -> str:
return pwd_context.hash(password)
async def get_current_user(token: str = Depends(oauth2_scheme), db: Session = Depends(get_db)):
from models.user import User
credentials_exception = HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Could not validate credentials",
headers={"WWW-Authenticate": "Bearer"},
)
try:
payload = jwt.decode(token, SECRET_KEY, algorithms=[ALGORITHM])
username = payload.get("sub")
if username is None:
raise credentials_exception
except JWTError:
raise credentials_exception
user = db.query(User).filter(User.username == username).first()
if user is None:
raise credentials_exception
return user
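For context, a sketch of how these helpers would typically be wired into the users/login route that oauth2_scheme points at; the router module and the User.hashed_password field name are assumptions, not shown in this diff:

from fastapi import APIRouter, Depends, HTTPException, status
from fastapi.security import OAuth2PasswordRequestForm
from sqlalchemy.orm import Session

from config.database import get_db
from services.security import create_access_token, verify_password

router = APIRouter()

@router.post("/users/login")
def login(
    form_data: OAuth2PasswordRequestForm = Depends(),
    db: Session = Depends(get_db),
):
    from models.user import User  # deferred import, mirroring get_current_user

    user = db.query(User).filter(User.username == form_data.username).first()
    if not user or not verify_password(form_data.password, user.hashed_password):
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED,
            detail="Incorrect username or password",
        )
    return {
        "access_token": create_access_token(subject=user.username),
        "token_type": "bearer",
    }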

View File

@@ -7,6 +7,7 @@ from typing import Dict, Mapping
 from sqlalchemy.orm import Session

 from models.application_setting import ApplicationSetting
+from models.theme_setting import ThemeSetting  # Import ThemeSetting model

 CSS_COLOR_CATEGORY = "theme"
 CSS_COLOR_VALUE_TYPE = "color"
@@ -92,7 +93,9 @@ def get_css_color_settings(db: Session) -> Dict[str, str]:
     return values


-def update_css_color_settings(db: Session, updates: Mapping[str, str]) -> Dict[str, str]:
+def update_css_color_settings(
+    db: Session, updates: Mapping[str, str]
+) -> Dict[str, str]:
     """Persist provided CSS color overrides and return the final values."""

     if not updates:
@@ -176,8 +179,10 @@ def _validate_functional_color(value: str) -> None:
 def _ensure_component_count(value: str, expected: int) -> None:
     if not value.endswith(")"):
-        raise ValueError("Color function expressions must end with a closing parenthesis")
-    inner = value[value.index("(") + 1 : -1]
+        raise ValueError(
+            "Color function expressions must end with a closing parenthesis"
+        )
+    inner = value[value.index("(") + 1: -1]
     parts = [segment.strip() for segment in inner.split(",")]
     if len(parts) != expected:
         raise ValueError(
@@ -206,3 +211,20 @@ def list_css_env_override_rows(
         }
     )
     return rows
+
+
+def save_theme_settings(db: Session, theme_data: dict):
+    theme = db.query(ThemeSetting).first() or ThemeSetting()
+    for key, value in theme_data.items():
+        setattr(theme, key, value)
+    db.add(theme)
+    db.commit()
+    db.refresh(theme)
+    return theme
+
+
+def get_theme_settings(db: Session):
+    theme = db.query(ThemeSetting).first()
+    if theme:
+        return {c.name: getattr(theme, c.name) for c in theme.__table__.columns}
+    return {}
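These two helpers back the /api/settings/theme endpoint that static/js/theme.js (added below) fetches and posts to. A sketch of plausible route wiring; the import path, router prefix, and response shape are assumptions, not shown in this diff:

from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session

from config.database import get_db
from services.settings import get_theme_settings, save_theme_settings  # module path assumed

router = APIRouter(prefix="/api/settings")

@router.get("/theme")
def read_theme(db: Session = Depends(get_db)):
    # Returns {} until a ThemeSetting row exists.
    return get_theme_settings(db)

@router.post("/theme")
def write_theme(theme_data: dict, db: Session = Depends(get_db)):
    theme = save_theme_settings(db, theme_data)
    # theme.js reads payload.theme when present.
    return {
        "theme": {c.name: getattr(theme, c.name) for c in theme.__table__.columns}
    }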

View File

@@ -25,12 +25,13 @@ def _ensure_positive_span(span: float, fallback: float) -> float:
     return span if span and span > 0 else fallback


-def _compile_parameters(parameters: Sequence[Dict[str, float]]) -> List[SimulationParameter]:
+def _compile_parameters(
+    parameters: Sequence[Dict[str, float]],
+) -> List[SimulationParameter]:
     compiled: List[SimulationParameter] = []
     for index, item in enumerate(parameters):
         if "value" not in item:
-            raise ValueError(
-                f"Parameter at index {index} must include 'value'")
+            raise ValueError(f"Parameter at index {index} must include 'value'")
         name = str(item.get("name", f"param_{index}"))
         base_value = float(item["value"])
         distribution = str(item.get("distribution", "normal")).lower()
@@ -43,8 +44,11 @@ def _compile_parameters(parameters: Sequence[Dict[str, float]]) -> List[Simulati
         if distribution == "normal":
             std_dev = item.get("std_dev")
-            std_dev_value = float(std_dev) if std_dev is not None else abs(
-                base_value) * DEFAULT_STD_DEV_RATIO or 1.0
+            std_dev_value = (
+                float(std_dev)
+                if std_dev is not None
+                else abs(base_value) * DEFAULT_STD_DEV_RATIO or 1.0
+            )
             compiled.append(
                 SimulationParameter(
                     name=name,
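One subtlety in the reformatted std_dev_value expression: when no std_dev is supplied, abs(base_value) * DEFAULT_STD_DEV_RATIO evaluates to 0.0 for a zero base value, and the trailing `or 1.0` then substitutes a non-degenerate width. A quick check, with an assumed ratio of 0.1:

DEFAULT_STD_DEV_RATIO = 0.1  # assumed value, not shown in this diff

def default_std_dev(base_value: float) -> float:
    # abs(0.0) * ratio == 0.0 is falsy, so `or 1.0` prevents a
    # zero-width normal distribution.
    return abs(base_value) * DEFAULT_STD_DEV_RATIO or 1.0

assert default_std_dev(200.0) == 20.0
assert default_std_dev(0.0) == 1.0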

View File

@@ -1,25 +1,29 @@
 :root {
-  --color-background: #f4f5f7;
-  --color-surface: #ffffff;
-  --color-text-primary: #2a1f33;
-  --color-text-secondary: #624769;
-  --color-text-muted: #64748b;
-  --color-text-subtle: #94a3b8;
+  --bg: #0b0f14;
+  --bg-2: #0f141b;
+  --card: #151b23;
+  --text: #e6edf3;
+  --muted: #a9b4c0;
+  --brand: #f1b21a;
+  --brand-2: #f6c648;
+  --brand-3: #f9d475;
+  --accent: #2ba58f;
+  --danger: #d14b4b;
+  --shadow: 0 10px 30px rgba(0, 0, 0, 0.35);
+  --radius: 14px;
+  --radius-sm: 10px;
+  --container: 1180px;
+  --muted: var(--muted);
+  --color-text-subtle: rgba(169, 180, 192, 0.6);
   --color-text-invert: #ffffff;
   --color-text-dark: #0f172a;
   --color-text-strong: #111827;
-  --color-primary: #5f320d;
-  --color-primary-strong: #7e4c13;
-  --color-primary-stronger: #837c15;
-  --color-accent: #bff838;
-  --color-border: #e2e8f0;
-  --color-border-strong: #cbd5e1;
-  --color-highlight: #eef2ff;
-  --color-panel-shadow: rgba(15, 23, 42, 0.08);
-  --color-panel-shadow-deep: rgba(15, 23, 42, 0.12);
-  --color-surface-alt: #f8fafc;
-  --color-success: #047857;
-  --color-error: #b91c1c;
+  --color-border: rgba(255, 255, 255, 0.08);
+  --color-border-strong: rgba(255, 255, 255, 0.12);
+  --color-highlight: rgba(241, 178, 26, 0.08);
+  --color-panel-shadow: rgba(0, 0, 0, 0.25);
+  --color-panel-shadow-deep: rgba(0, 0, 0, 0.35);
+  --color-surface-alt: rgba(21, 27, 35, 0.7);
   --space-2xs: 0.25rem;
   --space-xs: 0.5rem;
   --space-sm: 0.75rem;
@@ -33,15 +37,30 @@
   --font-size-lg: 1.25rem;
   --font-size-xl: 1.5rem;
   --font-size-2xl: 2rem;
-  --panel-radius: 12px;
-  --table-radius: 10px;
+  --panel-radius: var(--radius);
+  --table-radius: var(--radius-sm);
 }

+* {
+  box-sizing: border-box;
+}
+
+html,
+body {
+  height: 100%;
+}
+
 body {
   margin: 0;
-  font-family: "Segoe UI", Tahoma, Geneva, Verdana, sans-serif;
-  background-color: var(--color-background);
-  color: var(--color-text-primary);
+  font-family: ui-sans-serif, system-ui, -apple-system, 'Segoe UI', 'Roboto',
+    Helvetica, Arial, 'Apple Color Emoji', 'Segoe UI Emoji';
+  color: var(--text);
+  background: linear-gradient(180deg, var(--bg) 0%, var(--bg-2) 100%);
+  line-height: 1.45;
+}
+
+a {
+  color: var(--brand);
 }

 .app-layout {
@@ -51,7 +70,7 @@ body {
 .app-sidebar {
   width: 264px;
-  background-color: var(--color-primary);
+  background-color: var(--card);
   color: var(--color-text-invert);
   display: flex;
   flex-direction: column;
@@ -59,6 +78,7 @@ body {
   position: sticky;
   top: 0;
   height: 100vh;
+  border-right: 1px solid var(--color-border);
 }

 .sidebar-inner {
@@ -82,11 +102,7 @@ body {
   width: 44px;
   height: 44px;
   border-radius: 12px;
-  background: linear-gradient(
-    0deg,
-    var(--color-primary-stronger),
-    var(--color-accent)
-  );
+  background: linear-gradient(0deg, var(--brand-3), var(--accent));
   color: var(--color-text-invert);
   font-weight: 700;
   font-size: 1.1rem;
@@ -207,7 +223,7 @@ body {
 }

 .app-main {
-  background-color: var(--color-background);
+  background-color: var(--bg);
   display: flex;
   flex-direction: column;
   flex: 1;
@@ -240,7 +256,7 @@ body {
 .dashboard-subtitle {
   margin: 0.35rem 0 0;
-  color: var(--color-text-muted);
+  color: var(--muted);
 }

 .dashboard-actions {
@@ -259,7 +275,7 @@ body {
 .page-subtitle {
   margin-top: 0.35rem;
-  color: var(--color-text-muted);
+  color: var(--muted);
   font-size: 0.95rem;
 }

@@ -271,13 +287,14 @@ body {
 }

 .settings-card {
-  background: var(--color-surface);
-  border-radius: 12px;
+  background: var(--card);
+  border-radius: var(--radius);
   padding: 1.5rem;
-  box-shadow: 0 4px 14px var(--color-panel-shadow);
+  box-shadow: var(--shadow);
   display: flex;
   flex-direction: column;
   gap: 0.75rem;
+  border: 1px solid var(--color-border);
 }

 .settings-card h2 {
@@ -287,7 +304,7 @@ body {
 .settings-card p {
   margin: 0;
-  color: var(--color-text-muted);
+  color: var(--muted);
 }

 .settings-card-note {
@@ -311,7 +328,7 @@ body {
 .color-form-field.is-env-override {
   background: rgba(191, 248, 56, 0.12);
-  border-color: var(--color-accent);
+  border-color: var(--accent);
 }

 .color-field-header {
@@ -319,13 +336,13 @@ body {
   justify-content: space-between;
   gap: var(--space-sm);
   font-weight: 600;
-  color: var(--color-text-strong);
-  font-family: "Fira Code", "Consolas", "Courier New", monospace;
+  color: var(--text);
+  font-family: 'Fira Code', 'Consolas', 'Courier New', monospace;
   font-size: 0.85rem;
 }

 .color-field-default {
-  color: var(--color-text-muted);
+  color: var(--muted);
   font-weight: 500;
 }

@@ -337,7 +354,7 @@ body {
 .color-env-flag {
   font-size: 0.78rem;
   font-weight: 600;
-  color: var(--color-accent);
+  color: var(--accent);
   text-transform: uppercase;
   letter-spacing: 0.04em;
 }

@@ -349,7 +366,7 @@ body {
 }

 .color-value-input {
-  font-family: "Fira Code", "Consolas", "Courier New", monospace;
+  font-family: 'Fira Code', 'Consolas', 'Courier New', monospace;
 }

 .color-value-input[disabled] {
@@ -378,7 +395,7 @@ body {
 }

 .env-overrides-table code {
-  font-family: "Fira Code", "Consolas", "Courier New", monospace;
+  font-family: 'Fira Code', 'Consolas', 'Courier New', monospace;
   font-size: 0.85rem;
 }

@@ -391,7 +408,7 @@ body {
   border-radius: 999px;
   font-weight: 600;
   text-decoration: none;
-  background: var(--color-primary);
+  background: var(--brand);
   color: var(--color-text-invert);
   transition: transform 0.2s ease, box-shadow 0.2s ease;
 }

@@ -410,26 +427,27 @@ body {
 }

 .metric-card {
-  background: var(--color-surface);
-  border-radius: 12px;
+  background: var(--card);
+  border-radius: var(--radius);
   padding: 1.2rem 1.4rem;
-  box-shadow: 0 4px 14px var(--color-panel-shadow);
+  box-shadow: var(--shadow);
   display: flex;
   flex-direction: column;
   gap: 0.35rem;
+  border: 1px solid var(--color-border);
 }

 .metric-label {
   font-size: 0.82rem;
   text-transform: uppercase;
   letter-spacing: 0.04em;
-  color: var(--color-text-muted);
+  color: var(--muted);
 }

 .metric-value {
   font-size: 1.45rem;
   font-weight: 700;
-  color: var(--color-text-dark);
+  color: var(--muted);
 }

 .dashboard-charts {
@@ -522,7 +540,7 @@ body {
 }

 .list-detail {
-  color: var(--color-text-secondary);
+  color: var(--muted);
   font-size: 0.95rem;
 }

@@ -532,7 +550,7 @@ body {
 }

 .btn.is-loading::after {
-  content: "";
+  content: '';
   width: 0.85rem;
   height: 0.85rem;
   border: 2px solid rgba(255, 255, 255, 0.6);
@@ -550,7 +568,7 @@ body {
 }

 .panel {
-  background-color: var(--color-surface);
+  background-color: var(--card);
   border-radius: var(--panel-radius);
   padding: var(--space-xl);
   box-shadow: 0 2px 8px var(--color-panel-shadow);
@@ -560,7 +578,7 @@ body {
 .panel h2,
 .panel h3 {
   font-weight: 700;
-  color: var(--color-text-dark);
+  color: var(--text);
   margin: 0 0 var(--space-sm);
 }

@@ -583,7 +601,7 @@ body {
   flex-direction: column;
   gap: var(--space-sm);
   font-weight: 600;
-  color: var(--color-text-strong);
+  color: var(--text);
 }

 .form-grid input,
@@ -598,7 +616,7 @@ body {
 .form-grid input:focus,
 .form-grid textarea:focus,
 .form-grid select:focus {
-  outline: 2px solid var(--color-primary-strong);
+  outline: 2px solid var(--brand-2);
   outline-offset: 1px;
 }

@@ -624,13 +642,13 @@ body {
 }

 .btn.primary {
-  background-color: var(--color-primary-strong);
+  background-color: var(--brand-2);
   color: var(--color-text-invert);
 }

 .btn.primary:hover,
 .btn.primary:focus {
-  background-color: var(--color-primary-stronger);
+  background-color: var(--brand-3);
 }

 .result-output {
@@ -638,14 +656,14 @@ body {
   color: var(--color-surface-alt);
   padding: 1rem;
   border-radius: 8px;
-  font-family: "Fira Code", "Consolas", "Courier New", monospace;
+  font-family: 'Fira Code', 'Consolas', 'Courier New', monospace;
   overflow-x: auto;
   margin-top: 1.5rem;
 }

 .monospace-input {
   width: 100%;
-  font-family: "Fira Code", "Consolas", "Courier New", monospace;
+  font-family: 'Fira Code', 'Consolas', 'Courier New', monospace;
   min-height: 120px;
 }

@@ -670,7 +688,7 @@ table {
 }

 thead {
-  background-color: var(--color-primary);
+  background-color: var(--brand);
   color: var(--color-text-invert);
 }

@@ -687,7 +705,7 @@ tbody tr:nth-child(even) {
 .empty-state {
   margin-top: 1.5rem;
-  color: var(--color-text-muted);
+  color: var(--muted);
   font-style: italic;
 }

@@ -701,15 +719,15 @@ tbody tr:nth-child(even) {
 }

 .feedback.success {
-  color: var(--color-success);
+  color: var(--accent);
 }

 .feedback.error {
-  color: var(--color-error);
+  color: var(--danger);
 }

 .site-footer {
-  background-color: var(--color-primary);
+  background-color: var(--brand);
   color: var(--color-text-invert);
   margin-top: 3rem;
 }

static/js/theme.js (new file, 134 lines)
View File

@@ -0,0 +1,134 @@
// static/js/theme.js
document.addEventListener('DOMContentLoaded', () => {
const themeSettingsForm = document.getElementById('theme-settings-form');
const colorInputs = themeSettingsForm
? themeSettingsForm.querySelectorAll('input[type="color"]')
: [];
// Function to apply theme settings to CSS variables
function applyTheme(theme) {
const root = document.documentElement;
if (theme.primary_color)
root.style.setProperty('--color-primary', theme.primary_color);
if (theme.secondary_color)
root.style.setProperty('--color-secondary', theme.secondary_color);
if (theme.accent_color)
root.style.setProperty('--color-accent', theme.accent_color);
if (theme.background_color)
root.style.setProperty('--color-background', theme.background_color);
if (theme.text_color)
root.style.setProperty('--color-text-primary', theme.text_color);
// Add other theme properties as needed
}
// Save theme to local storage
function saveTheme(theme) {
localStorage.setItem('user-theme', JSON.stringify(theme));
}
// Load theme from local storage
function loadTheme() {
const savedTheme = localStorage.getItem('user-theme');
return savedTheme ? JSON.parse(savedTheme) : null;
}
// Real-time preview for color inputs
colorInputs.forEach((input) => {
input.addEventListener('input', (event) => {
const cssVar = `--color-${event.target.id.replace('-', '_')}`;
document.documentElement.style.setProperty(cssVar, event.target.value);
});
});
const THEME_API_URL = '/api/settings/theme';
const normalizeTheme = (theme) => {
if (!theme || typeof theme !== 'object') {
return {};
}
const {
theme_name,
primary_color,
secondary_color,
accent_color,
background_color,
text_color,
} = theme;
return {
theme_name,
primary_color,
secondary_color,
accent_color,
background_color,
text_color,
};
};
if (themeSettingsForm) {
themeSettingsForm.addEventListener('submit', async (event) => {
event.preventDefault();
const formData = new FormData(themeSettingsForm);
const themeData = Object.fromEntries(formData.entries());
try {
const response = await fetch(THEME_API_URL, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify(themeData),
});
if (response.ok) {
const payload = await response.json();
const savedTheme = normalizeTheme(payload?.theme ?? themeData);
alert('Theme settings saved successfully!');
applyTheme(savedTheme);
saveTheme(savedTheme);
} else {
const errorData = await response.json();
alert(`Error saving theme settings: ${errorData.detail}`);
}
} catch (error) {
console.error('Error:', error);
alert('An error occurred while saving theme settings.');
}
});
}
// Load and apply theme on page load
const initialTheme = loadTheme();
if (initialTheme) {
applyTheme(initialTheme);
// Populate form fields if on the theme settings page
if (themeSettingsForm) {
for (const key in initialTheme) {
const input = themeSettingsForm.querySelector(
`#${key.replace('_', '-')}`
);
if (input) {
input.value = initialTheme[key];
}
}
}
} else {
// If no saved theme, load from backend (if available)
async function loadAndApplyThemeFromServer() {
try {
const response = await fetch(THEME_API_URL);
if (response.ok) {
const theme = normalizeTheme(await response.json());
applyTheme(theme);
saveTheme(theme); // Save to local storage for future use
} else {
console.error('Failed to load theme settings from server');
}
} catch (error) {
console.error('Error loading theme settings from server:', error);
}
}
loadAndApplyThemeFromServer();
}
});

View File

@@ -20,5 +20,6 @@
     </div>
   </div>
   {% block scripts %}{% endblock %}
+  <script src="/static/js/theme.js"></script>
 </body>
</html>

View File

@@ -0,0 +1,17 @@
{% extends "base.html" %}
{% block title %}Forgot Password{% endblock %}
{% block content %}
<div class="container">
<h1>Forgot Password</h1>
<form id="forgot-password-form">
<div class="form-group">
<label for="email">Email:</label>
<input type="email" id="email" name="email" required>
</div>
<button type="submit">Reset Password</button>
</form>
<p>Remember your password? <a href="/login">Login here</a></p>
</div>
{% endblock %}

templates/login.html (new file, 22 lines)
View File

@@ -0,0 +1,22 @@
{% extends "base.html" %}
{% block title %}Login{% endblock %}
{% block content %}
<div class="container">
<h1>Login</h1>
<form id="login-form">
<div class="form-group">
<label for="username">Username:</label>
<input type="text" id="username" name="username" required>
</div>
<div class="form-group">
<label for="password">Password:</label>
<input type="password" id="password" name="password" required>
</div>
<button type="submit">Login</button>
</form>
<p>Don't have an account? <a href="/register">Register here</a></p>
<p><a href="/forgot-password">Forgot password?</a></p>
</div>
{% endblock %}

View File

@@ -1,88 +1,49 @@
-{% set nav_groups = [
-    {
-        "label": "Dashboard",
-        "links": [
-            {"href": "/", "label": "Dashboard"},
-        ],
-    },
-    {
-        "label": "Scenarios",
-        "links": [
-            {"href": "/ui/scenarios", "label": "Overview"},
-            {"href": "/ui/parameters", "label": "Parameters"},
-            {"href": "/ui/costs", "label": "Costs"},
-            {"href": "/ui/consumption", "label": "Consumption"},
-            {"href": "/ui/production", "label": "Production"},
-            {
-                "href": "/ui/equipment",
-                "label": "Equipment",
-                "children": [
-                    {"href": "/ui/maintenance", "label": "Maintenance"},
-                ],
-            },
-        ],
-    },
-    {
-        "label": "Analysis",
-        "links": [
-            {"href": "/ui/simulations", "label": "Simulations"},
-            {"href": "/ui/reporting", "label": "Reporting"},
-        ],
-    },
-    {
-        "label": "Settings",
-        "links": [
-            {
-                "href": "/ui/settings",
-                "label": "Settings",
-                "children": [
-                    {"href": "/ui/currencies", "label": "Currency Management"},
-                ],
-            },
-        ],
-    },
-] %}
+{% set nav_groups = [
+    {
+        "label": "Dashboard",
+        "links": [
+            {"href": "/", "label": "Dashboard"},
+        ],
+    },
+    {
+        "label": "Overview",
+        "links": [
+            {"href": "/ui/parameters", "label": "Parameters"},
+            {"href": "/ui/costs", "label": "Costs"},
+            {"href": "/ui/consumption", "label": "Consumption"},
+            {"href": "/ui/production", "label": "Production"},
+            {
+                "href": "/ui/equipment",
+                "label": "Equipment",
+                "children": [
+                    {"href": "/ui/maintenance", "label": "Maintenance"},
+                ],
+            },
+        ],
+    },
+    {
+        "label": "Simulations",
+        "links": [
+            {"href": "/ui/simulations", "label": "Simulations"},
+        ],
+    },
+    {
+        "label": "Analytics",
+        "links": [
+            {"href": "/ui/reporting", "label": "Reporting"},
+        ],
+    },
+    {
+        "label": "Settings",
+        "links": [
+            {
+                "href": "/ui/settings",
+                "label": "Settings",
+                "children": [
+                    {"href": "/theme-settings", "label": "Themes"},
+                    {"href": "/ui/currencies", "label": "Currency Management"},
+                ],
+            },
+        ],
+    },
+] %}
 <nav class="sidebar-nav" aria-label="Primary navigation">
   {% set current_path = request.url.path if request else "" %}
   {% for group in nav_groups %}
   <div class="sidebar-section">
     <div class="sidebar-section-label">{{ group.label }}</div>
     <div class="sidebar-section-links">
       {% for link in group.links %}
       {% set href = link.href %}
       {% if href == "/" %}
       {% set is_active = current_path == "/" %}
       {% else %}
       {% set is_active = current_path.startswith(href) %}
       {% endif %}
       <div class="sidebar-link-block">
         <a
           href="{{ href }}"
           class="sidebar-link{% if is_active %} is-active{% endif %}"
         >
           {{ link.label }}
         </a>
         {% if link.children %}
         <div class="sidebar-sublinks">
           {% for child in link.children %}
           {% if child.href == "/" %}
           {% set child_active = current_path == "/" %}
           {% else %}
           {% set child_active = current_path.startswith(child.href) %}
           {% endif %}
           <a
             href="{{ child.href }}"
             class="sidebar-sublink{% if child_active %} is-active{% endif %}"
           >
             {{ child.label }}
           </a>
           {% endfor %}
         </div>
         {% endif %}
       </div>
       {% endfor %}
     </div>
   </div>
   {% endfor %}
</nav>

templates/profile.html (new file, 31 lines)
View File

@@ -0,0 +1,31 @@
{% extends "base.html" %}
{% block title %}Profile{% endblock %}
{% block content %}
<div class="container">
<h1>User Profile</h1>
<p>Username: <span id="profile-username"></span></p>
<p>Email: <span id="profile-email"></span></p>
<button id="edit-profile-button">Edit Profile</button>
<div id="edit-profile-form" style="display:none;">
<h2>Edit Profile</h2>
<form>
<div class="form-group">
<label for="edit-username">Username:</label>
<input type="text" id="edit-username" name="username">
</div>
<div class="form-group">
<label for="edit-email">Email:</label>
<input type="email" id="edit-email" name="email">
</div>
<div class="form-group">
<label for="edit-password">New Password:</label>
<input type="password" id="edit-password" name="password">
</div>
<button type="submit">Save Changes</button>
</form>
</div>
</div>
{% endblock %}

templates/register.html (new file, 25 lines)
View File

@@ -0,0 +1,25 @@
{% extends "base.html" %}
{% block title %}Register{% endblock %}
{% block content %}
<div class="container">
<h1>Register</h1>
<form id="register-form">
<div class="form-group">
<label for="username">Username:</label>
<input type="text" id="username" name="username" required>
</div>
<div class="form-group">
<label for="email">Email:</label>
<input type="email" id="email" name="email" required>
</div>
<div class="form-group">
<label for="password">Password:</label>
<input type="password" id="password" name="password" required>
</div>
<button type="submit">Register</button>
</form>
<p>Already have an account? <a href="/login">Login here</a></p>
</div>
{% endblock %}

View File

@@ -1,113 +1,26 @@
 {% extends "base.html" %}

 {% block title %}Settings · CalMiner{% endblock %}

 {% block content %}
 <section class="page-header">
   <div>
     <h1>Settings</h1>
     <p class="page-subtitle">Configure platform defaults and administrative options.</p>
   </div>
 </section>
 <section class="settings-grid">
   <article class="settings-card">
     <h2>Currency Management</h2>
     <p>Manage available currencies, symbols, and default selections from the Currency Management page.</p>
     <a class="button-link" href="/ui/currencies">Go to Currency Management</a>
   </article>
   <article class="settings-card">
-    <h2>Visual Theme</h2>
+    <h2>Themes</h2>
     <p>Adjust CalMiner theme colors and preview changes instantly.</p>
-    <p class="settings-card-note">Changes save to the settings table and apply across the UI after submission. Environment overrides (if configured) remain read-only.</p>
+    <a class="button-link" href="/theme-settings">Go to Theme Settings</a>
   </article>
 </section>
-
-<section class="panel" id="theme-settings" data-api="/api/settings/css">
-  <header class="panel-header">
-    <div>
-      <h2>Theme Colors</h2>
-      <p class="chart-subtitle">Update global CSS variables to customize CalMiner&apos;s appearance.</p>
-    </div>
-  </header>
-  <form id="theme-settings-form" class="form-grid color-form-grid" novalidate>
-    {% for key, value in css_variables.items() %}
-    {% set env_meta = css_env_override_meta.get(key) %}
-    <label class="color-form-field{% if env_meta %} is-env-override{% endif %}" data-variable="{{ key }}">
-      <span class="color-field-header">
-        <span class="color-field-name">{{ key }}</span>
-        <span class="color-field-default">Default: {{ css_defaults[key] }}</span>
-      </span>
-      <span class="color-field-helper" id="color-helper-{{ loop.index }}">Accepts hex, rgb(a), or hsl(a) values.</span>
-      {% if env_meta %}
-      <span class="color-env-flag">Managed via {{ env_meta.env_var }} (read-only)</span>
-      {% endif %}
-      <span class="color-input-row">
-        <input
-          type="text"
-          name="{{ key }}"
-          class="color-value-input"
-          value="{{ value }}"
-          autocomplete="off"
-          aria-describedby="color-helper-{{ loop.index }}"
-          {% if env_meta %}disabled aria-disabled="true" data-env-override="true"{% endif %}
-        />
-        <span class="color-preview" aria-hidden="true" style="background: {{ value }}"></span>
-      </span>
-    </label>
-    {% endfor %}
-    <div class="button-row">
-      <button type="submit" class="btn primary">Save Theme</button>
-      <button type="button" class="btn" id="theme-settings-reset">Reset to Defaults</button>
-    </div>
-  </form>
-  {% from "partials/components.html" import feedback with context %}
-  {{ feedback("theme-settings-feedback") }}
-</section>
-
-<section class="panel" id="theme-env-overrides">
-  <header class="panel-header">
-    <div>
-      <h2>Environment Overrides</h2>
-      <p class="chart-subtitle">The following CSS variables are controlled via environment variables and take precedence over database values.</p>
-    </div>
-  </header>
-  {% if css_env_override_rows %}
-  <div class="table-container env-overrides-table">
-    <table aria-label="Environment-controlled theme variables">
-      <thead>
-        <tr>
-          <th scope="col">CSS Variable</th>
-          <th scope="col">Environment Variable</th>
-          <th scope="col">Value</th>
-        </tr>
-      </thead>
-      <tbody>
-        {% for row in css_env_override_rows %}
-        <tr>
-          <td><code>{{ row.css_key }}</code></td>
-          <td><code>{{ row.env_var }}</code></td>
-          <td><code>{{ row.value }}</code></td>
-        </tr>
-        {% endfor %}
-      </tbody>
-    </table>
-  </div>
-  {% else %}
-  <p class="empty-state">No environment overrides configured.</p>
-  {% endif %}
-</section>
-{% endblock %}
-
-{% block scripts %}
-{{ super() }}
-<script id="theme-settings-data" type="application/json">
-  {{ {
-    "variables": css_variables,
-    "defaults": css_defaults,
-    "envOverrides": css_env_overrides,
-    "envSources": css_env_override_rows
-  } | tojson }}
-</script>
-<script src="/static/js/settings.js"></script>
 {% endblock %}

View File

@@ -0,0 +1,125 @@
{% extends "base.html" %} {% block title %}Theme Settings · CalMiner{% endblock
%} {% block content %}
<section class="page-header">
<div>
<h1>Theme Settings</h1>
<p class="page-subtitle">
Adjust CalMiner theme colors and preview changes instantly.
</p>
</div>
</section>
<section class="panel" id="theme-settings" data-api="/api/settings/css">
<header class="panel-header">
<div>
<h2>Theme Colors</h2>
<p class="chart-subtitle">
Update global CSS variables to customize CalMiner&apos;s appearance.
</p>
</div>
</header>
<form id="theme-settings-form" class="form-grid color-form-grid" novalidate>
{% for key, value in css_variables.items() %} {% set env_meta =
css_env_override_meta.get(key) %}
<label
class="color-form-field{% if env_meta %} is-env-override{% endif %}"
data-variable="{{ key }}"
>
<span class="color-field-header">
<span class="color-field-name">{{ key }}</span>
<span class="color-field-default"
>Default: {{ css_defaults[key] }}</span
>
</span>
<span class="color-field-helper" id="color-helper-{{ loop.index }}"
>Accepts hex, rgb(a), or hsl(a) values.</span
>
{% if env_meta %}
<span class="color-env-flag"
>Managed via {{ env_meta.env_var }} (read-only)</span
>
{% endif %}
<span class="color-input-row">
<input
type="text"
name="{{ key }}"
class="color-value-input"
value="{{ value }}"
autocomplete="off"
aria-describedby="color-helper-{{ loop.index }}"
{%
if
env_meta
%}disabled
aria-disabled="true"
data-env-override="true"
{%
endif
%}
/>
<span
class="color-preview"
aria-hidden="true"
style="background: {{ value }}"
></span>
</span>
</label>
{% endfor %}
<div class="button-row">
<button type="submit" class="btn primary">Save Theme</button>
<button type="button" class="btn" id="theme-settings-reset">
Reset to Defaults
</button>
</div>
</form>
{% from "partials/components.html" import feedback with context %} {{
feedback("theme-settings-feedback") }}
</section>
<section class="panel" id="theme-env-overrides">
<header class="panel-header">
<div>
<h2>Environment Overrides</h2>
<p class="chart-subtitle">
The following CSS variables are controlled via environment variables and
take precedence over database values.
</p>
</div>
</header>
{% if css_env_override_rows %}
<div class="table-container env-overrides-table">
<table aria-label="Environment-controlled theme variables">
<thead>
<tr>
<th scope="col">CSS Variable</th>
<th scope="col">Environment Variable</th>
<th scope="col">Value</th>
</tr>
</thead>
<tbody>
{% for row in css_env_override_rows %}
<tr>
<td><code>{{ row.css_key }}</code></td>
<td><code>{{ row.env_var }}</code></td>
<td><code>{{ row.value }}</code></td>
</tr>
{% endfor %}
</tbody>
</table>
</div>
{% else %}
<p class="empty-state">No environment overrides configured.</p>
{% endif %}
</section>
{% endblock %} {% block scripts %} {{ super() }}
<script id="theme-settings-data" type="application/json">
{{ {
"variables": css_variables,
"defaults": css_defaults,
"envOverrides": css_env_overrides,
"envSources": css_env_override_rows
} | tojson }}
</script>
<script src="/static/js/settings.js"></script>
{% endblock %}

View File

@@ -4,6 +4,7 @@ import time
 from typing import Dict, Generator

 import pytest
+# type: ignore[import]
 from playwright.sync_api import Browser, Page, Playwright, sync_playwright
@@ -70,10 +71,17 @@ def seed_default_currencies(live_server: str) -> None:
     seeds = [
         {"code": "EUR", "name": "Euro", "symbol": "EUR", "is_active": True},
-        {"code": "CLP", "name": "Chilean Peso", "symbol": "CLP$", "is_active": True},
+        {
+            "code": "CLP",
+            "name": "Chilean Peso",
+            "symbol": "CLP$",
+            "is_active": True,
+        },
     ]

-    with httpx.Client(base_url=live_server, timeout=5.0, trust_env=False) as client:
+    with httpx.Client(
+        base_url=live_server, timeout=5.0, trust_env=False
+    ) as client:
         try:
             response = client.get("/api/currencies/?include_inactive=true")
             response.raise_for_status()
@@ -128,8 +136,12 @@ def page(browser: Browser, live_server: str) -> Generator[Page, None, None]:
 def _prepare_database_environment(env: Dict[str, str]) -> Dict[str, str]:
     """Ensure granular database env vars are available for the app under test."""
-    required = ("DATABASE_HOST", "DATABASE_USER",
-                "DATABASE_NAME", "DATABASE_PASSWORD")
+    required = (
+        "DATABASE_HOST",
+        "DATABASE_USER",
+        "DATABASE_NAME",
+        "DATABASE_PASSWORD",
+    )
     if all(env.get(key) for key in required):
         return env

View File

@@ -7,7 +7,9 @@ def test_consumption_form_loads(page: Page):
     """Verify the consumption form page loads correctly."""
     page.goto("/ui/consumption")
     expect(page).to_have_title("Consumption · CalMiner")
-    expect(page.locator("h2:has-text('Add Consumption Record')")).to_be_visible()
+    expect(
+        page.locator("h2:has-text('Add Consumption Record')")
+    ).to_be_visible()


 def test_create_consumption_item(page: Page):

View File

@@ -55,7 +55,9 @@ def test_create_capex_and_opex_items(page: Page):
     ).to_be_visible()

     # Verify the feedback messages.
-    expect(page.locator("#capex-feedback")
-           ).to_have_text("Entry saved successfully.")
-    expect(page.locator("#opex-feedback")
-           ).to_have_text("Entry saved successfully.")
+    expect(page.locator("#capex-feedback")).to_have_text(
+        "Entry saved successfully."
+    )
+    expect(page.locator("#opex-feedback")).to_have_text(
+        "Entry saved successfully."
+    )

View File

@@ -12,7 +12,8 @@ def _unique_currency_code(existing: set[str]) -> str:
         if candidate not in existing and candidate != "USD":
             return candidate
     raise AssertionError(
-        "Unable to generate a unique currency code for the test run.")
+        "Unable to generate a unique currency code for the test run."
+    )


 def _metric_value(page: Page, element_id: str) -> int:
@@ -42,8 +43,9 @@ def test_currency_workflow_create_update_toggle(page: Page) -> None:
     expect(page.locator("h2:has-text('Currency Overview')")).to_be_visible()

     code_cells = page.locator("#currencies-table-body tr td:nth-child(1)")
-    existing_codes = {text.strip().upper()
-                      for text in code_cells.all_inner_texts()}
+    existing_codes = {
+        text.strip().upper() for text in code_cells.all_inner_texts()
+    }

     total_before = _metric_value(page, "currency-metric-total")
     active_before = _metric_value(page, "currency-metric-active")
@@ -109,7 +111,9 @@ def test_currency_workflow_create_update_toggle(page: Page) -> None:
     toggle_button = row.locator("button[data-action='toggle']")
     expect(toggle_button).to_have_text("Activate")

-    with page.expect_response(f"**/api/currencies/{new_code}/activation") as toggle_info:
+    with page.expect_response(
+        f"**/api/currencies/{new_code}/activation"
+    ) as toggle_info:
         toggle_button.click()
     toggle_response = toggle_info.value
     assert toggle_response.status == 200
@@ -126,5 +130,6 @@ def test_currency_workflow_create_update_toggle(page: Page) -> None:
     _expect_feedback(page, f"Currency {new_code} activated.")
     expect(row.locator("td").nth(3)).to_contain_text("Active")
-    expect(row.locator("button[data-action='toggle']")
-           ).to_have_text("Deactivate")
+    expect(row.locator("button[data-action='toggle']")).to_have_text(
+        "Deactivate"
+    )

View File

@@ -38,11 +38,8 @@ def test_create_equipment_item(page: Page):
     # Verify the new item appears in the table.
     page.select_option("#equipment-scenario-filter", label=scenario_name)
     expect(
-        page.locator("#equipment-table-body tr").filter(
-            has_text=equipment_name
-        )
+        page.locator("#equipment-table-body tr").filter(has_text=equipment_name)
     ).to_be_visible()

     # Verify the feedback message.
-    expect(page.locator("#equipment-feedback")
-           ).to_have_text("Equipment saved.")
+    expect(page.locator("#equipment-feedback")).to_have_text("Equipment saved.")

View File

@@ -53,5 +53,6 @@ def test_create_maintenance_item(page: Page):
     ).to_be_visible()

     # Verify the feedback message.
-    expect(page.locator("#maintenance-feedback")
-           ).to_have_text("Maintenance entry saved.")
+    expect(page.locator("#maintenance-feedback")).to_have_text(
+        "Maintenance entry saved."
+    )

View File

@@ -43,5 +43,6 @@ def test_create_production_item(page: Page):
     ).to_be_visible()

     # Verify the feedback message.
-    expect(page.locator("#production-feedback")
-           ).to_have_text("Production output saved.")
+    expect(page.locator("#production-feedback")).to_have_text(
+        "Production output saved."
+    )

View File

@@ -39,4 +39,5 @@ def test_create_new_scenario(page: Page):
     feedback = page.locator("#feedback")
     expect(feedback).to_be_visible()
-    expect(feedback).to_have_text(
-        f'Scenario "{scenario_name}" created successfully.')
+    expect(feedback).to_have_text(
+        f'Scenario "{scenario_name}" created successfully.'
+    )

View File

@@ -5,7 +5,11 @@ from playwright.sync_api import Page, expect
 UI_ROUTES = [
     ("/", "Dashboard · CalMiner", "Operations Overview"),
     ("/ui/dashboard", "Dashboard · CalMiner", "Operations Overview"),
-    ("/ui/scenarios", "Scenario Management · CalMiner", "Create a New Scenario"),
+    (
+        "/ui/scenarios",
+        "Scenario Management · CalMiner",
+        "Create a New Scenario",
+    ),
     ("/ui/parameters", "Process Parameters · CalMiner", "Scenario Parameters"),
     ("/ui/settings", "Settings · CalMiner", "Settings"),
     ("/ui/costs", "Costs · CalMiner", "Cost Overview"),
@@ -20,35 +24,44 @@ UI_ROUTES = [

 @pytest.mark.parametrize("url, title, heading", UI_ROUTES)
-def test_ui_pages_load_correctly(page: Page, url: str, title: str, heading: str):
+def test_ui_pages_load_correctly(
+    page: Page, url: str, title: str, heading: str
+):
     """Verify that all UI pages load with the correct title and a visible heading."""
     page.goto(url)
     expect(page).to_have_title(title)

     # The app uses a mix of h1 and h2 for main page headings.
     heading_locator = page.locator(
-        f"h1:has-text('{heading}'), h2:has-text('{heading}')")
+        f"h1:has-text('{heading}'), h2:has-text('{heading}')"
+    )
     expect(heading_locator.first).to_be_visible()


 def test_settings_theme_form_interaction(page: Page):
-    page.goto("/ui/settings")
-    expect(page).to_have_title("Settings · CalMiner")
+    page.goto("/theme-settings")
+    expect(page).to_have_title("Theme Settings · CalMiner")

     env_rows = page.locator("#theme-env-overrides tbody tr")
     disabled_inputs = page.locator(
-        "#theme-settings-form input.color-value-input[disabled]")
+        "#theme-settings-form input.color-value-input[disabled]"
+    )
     env_row_count = env_rows.count()
     disabled_count = disabled_inputs.count()
     assert disabled_count == env_row_count

     color_input = page.locator(
-        "#theme-settings-form input[name='--color-primary']")
+        "#theme-settings-form input[name='--color-primary']"
+    )
     expect(color_input).to_be_visible()
     expect(color_input).to_be_enabled()

     original_value = color_input.input_value()
     candidate_values = ("#114455", "#225566")
-    new_value = candidate_values[0] if original_value != candidate_values[0] else candidate_values[1]
+    new_value = (
+        candidate_values[0]
+        if original_value != candidate_values[0]
+        else candidate_values[1]
+    )
     color_input.fill(new_value)
     page.click("#theme-settings-form button[type='submit']")

View File

@@ -27,7 +27,8 @@ engine = create_engine(
     poolclass=StaticPool,
 )

 TestingSessionLocal = sessionmaker(
-    autocommit=False, autoflush=False, bind=engine)
+    autocommit=False, autoflush=False, bind=engine
+)
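The engine/sessionmaker pairing reformatted here is the standard recipe for sharing one in-memory SQLite database across an entire test run. A self-contained sketch of the idea, independent of this repo's models:

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.pool import StaticPool

# StaticPool hands every checkout the same underlying connection, which is
# what keeps a ":memory:" SQLite database alive across the whole session.
engine = create_engine(
    "sqlite://",
    connect_args={"check_same_thread": False},
    poolclass=StaticPool,
)
TestingSessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)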
@pytest.fixture(scope="session", autouse=True) @pytest.fixture(scope="session", autouse=True)
@@ -37,19 +38,24 @@ def setup_database() -> Generator[None, None, None]:
application_setting, application_setting,
capex, capex,
consumption, consumption,
currency,
distribution, distribution,
equipment, equipment,
maintenance, maintenance,
opex, opex,
parameters, parameters,
production_output, production_output,
role,
scenario, scenario,
simulation_result, simulation_result,
theme_setting,
user,
) # noqa: F401 - imported for side effects ) # noqa: F401 - imported for side effects
_ = ( _ = (
capex, capex,
consumption, consumption,
currency,
distribution, distribution,
equipment, equipment,
maintenance, maintenance,
@@ -57,8 +63,11 @@ def setup_database() -> Generator[None, None, None]:
opex, opex,
parameters, parameters,
production_output, production_output,
role,
scenario, scenario,
simulation_result, simulation_result,
theme_setting,
user,
) )
Base.metadata.create_all(bind=engine) Base.metadata.create_all(bind=engine)
@@ -86,22 +95,23 @@ def api_client(db_session: Session) -> Generator[TestClient, None, None]:
         finally:
             pass

-    from routes import dependencies as route_dependencies
+    from routes.dependencies import get_db

-    app.dependency_overrides[route_dependencies.get_db] = override_get_db
+    app.dependency_overrides[get_db] = override_get_db
     with TestClient(app) as client:
         yield client
-    app.dependency_overrides.pop(route_dependencies.get_db, None)
+    app.dependency_overrides.pop(get_db, None)


 @pytest.fixture()
-def seeded_ui_data(db_session: Session) -> Generator[Dict[str, Any], None, None]:
+def seeded_ui_data(
+    db_session: Session,
+) -> Generator[Dict[str, Any], None, None]:
     """Populate a scenario with representative related records for UI tests."""

     scenario_name = f"Scenario Alpha {uuid4()}"
-    scenario = Scenario(name=scenario_name,
-                        description="Seeded UI scenario")
+    scenario = Scenario(name=scenario_name, description="Seeded UI scenario")
     db_session.add(scenario)
     db_session.flush()
@@ -161,7 +171,9 @@ def seeded_ui_data(db_session: Session) -> Generator[Dict[str, Any], None, None]
             iteration=index,
             result=value,
         )
-        for index, value in enumerate((950_000.0, 975_000.0, 990_000.0), start=1)
+        for index, value in enumerate(
+            (950_000.0, 975_000.0, 990_000.0), start=1
+        )
     ]

     db_session.add(maintenance)
@@ -196,11 +208,15 @@ def seeded_ui_data(db_session: Session) -> Generator[Dict[str, Any], None, None]

 @pytest.fixture()
-def invalid_request_payloads(db_session: Session) -> Generator[Dict[str, Any], None, None]:
+def invalid_request_payloads(
+    db_session: Session,
+) -> Generator[Dict[str, Any], None, None]:
     """Provide reusable invalid request bodies for exercising validation branches."""

     duplicate_name = f"Scenario Duplicate {uuid4()}"
-    existing = Scenario(name=duplicate_name,
-                        description="Existing scenario for duplicate checks")
+    existing = Scenario(
+        name=duplicate_name,
+        description="Existing scenario for duplicate checks",
+    )
     db_session.add(existing)
     db_session.commit()

Some files were not shown because too many files have changed in this diff.