Quickstart & Expanded Project Documentation

This document contains the expanded development, usage, testing, and migration guidance that was moved out of the top-level README for brevity.

Development

To get started locally:

# Clone the repository
git clone https://git.allucanget.biz/allucanget/calminer.git
cd calminer

# Create and activate a virtual environment
python -m venv .venv
.\.venv\Scripts\Activate.ps1
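# On Linux/macOS, activate with: source .venv/bin/activate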

# Install dependencies
pip install -r requirements.txt

# Start the development server
uvicorn main:app --reload

Docker-based setup

To build and run the application using Docker instead of a local Python environment:

# Build the application image (multi-stage build keeps runtime small)
docker build -t calminer:latest .

# Start the container on port 8000
docker run --rm -p 8000:8000 calminer:latest

# Supply environment variables (e.g., Postgres connection)
docker run --rm -p 8000:8000 ^
  -e DATABASE_DRIVER="postgresql" ^
  -e DATABASE_HOST="db.host" ^
  -e DATABASE_PORT="5432" ^
  -e DATABASE_USER="calminer" ^
  -e DATABASE_PASSWORD="s3cret" ^
  -e DATABASE_NAME="calminer" ^
  -e DATABASE_SCHEMA="public" ^
  calminer:latest

If you maintain a Postgres or Redis dependency locally, consider authoring a docker compose stack that pairs them with the app container. The Docker image expects the database to be reachable and migrations to have been applied before it serves traffic.
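
A minimal sketch of that idea using a plain shared Docker network instead of compose (the container names, credentials, and postgres image tag below are illustrative, not part of the repository):

# Create a network the app and database containers can share
docker network create calminer-net

# Run Postgres on that network (example credentials)
docker run -d --name calminer-db --network calminer-net ^
  -e POSTGRES_USER="calminer" -e POSTGRES_PASSWORD="s3cret" ^
  -e POSTGRES_DB="calminer" postgres:16

# Point the app container at the database by container name
docker run --rm --network calminer-net -p 8000:8000 ^
  -e DATABASE_DRIVER="postgresql" -e DATABASE_HOST="calminer-db" ^
  -e DATABASE_PORT="5432" -e DATABASE_USER="calminer" ^
  -e DATABASE_PASSWORD="s3cret" -e DATABASE_NAME="calminer" ^
  -e DATABASE_SCHEMA="public" calminer:latest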

Usage Overview

  • API base URL: http://localhost:8000/api
  • Key routes include creating scenarios, parameters, costs, consumption, production, equipment, maintenance, and reporting summaries. See the routes/ directory for full details.
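
For a quick smoke test from the command line, something like the following works once the server is up (the exact route paths, including /api/scenarios here, should be confirmed against routes/):

curl http://localhost:8000/api/scenarios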

Theme configuration

  • Open /ui/settings to access the Settings dashboard. The Theme Colors form lists every CSS variable persisted in the application_setting table. Updates apply immediately across the UI once saved.
  • Use the accompanying API endpoints for automation or integration tests (example requests follow this list):
    • GET /api/settings/css returns the active variables, defaults, and metadata describing any environment overrides.
    • PUT /api/settings/css accepts a payload such as {"variables": {"--color-primary": "#112233"}} and persists the change unless an environment override is in place.
  • Environment variables prefixed with CALMINER_THEME_ win over database values. For example, setting CALMINER_THEME_COLOR_PRIMARY="#112233" renders the corresponding input read-only and surfaces the override in the Environment Overrides table.
  • Acceptable values include hex (#rrggbb or #rrggbbaa), rgb()/rgba(), and hsl()/hsla() expressions with the expected number of components. Invalid inputs trigger a validation error and the API responds with HTTP 422.
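
As a concrete example, both settings endpoints can be exercised with curl:

# Read the active variables, defaults, and override metadata
curl http://localhost:8000/api/settings/css

# Persist a new primary color (invalid values return HTTP 422)
curl -X PUT http://localhost:8000/api/settings/css ^
  -H "Content-Type: application/json" ^
  -d "{\"variables\": {\"--color-primary\": \"#112233\"}}"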

Dashboard Preview

  1. Start the FastAPI server and navigate to /.
  2. Review the headline metrics, scenario snapshot table, and cost/activity charts sourced from the current database state.
  3. Use the "Refresh Dashboard" button to pull freshly aggregated data via /ui/dashboard/data without reloading the page.
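
The same aggregated payload the button fetches is also reachable directly, which is handy for scripted checks:

curl http://localhost:8000/ui/dashboard/data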

Testing

Run the unit test suite:

pytest

E2E tests are browser-driven via Playwright and rely on a session-scoped live_server fixture that starts the app at http://localhost:8001.
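
A minimal sketch of such a test, assuming the standard pytest-playwright page fixture is available alongside live_server (the file name, route, and assertion are illustrative):

# tests/e2e/test_smoke.py (illustrative)
def test_dashboard_renders(live_server, page):
    # live_server starts the app at http://localhost:8001 for the session
    page.goto("http://localhost:8001/")
    # a loaded page should carry a non-empty title; real tests assert on content
    assert page.title() != ""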

Migrations & Baseline

A consolidated baseline migration (scripts/migrations/000_base.sql) captures all schema changes required for a fresh installation. The script is idempotent: it creates the currency and measurement_unit reference tables, provisions the application_setting store for configurable UI/system options, ensures consumption and production records expose unit metadata, and enforces the foreign keys used by CAPEX and OPEX.
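
Schematically, the baseline leans on conditional DDL so reruns are no-ops; the snippet below illustrates the pattern only and is not an excerpt from 000_base.sql (the column definitions are hypothetical):

-- Reference tables are created only if absent
CREATE TABLE IF NOT EXISTS currency (
    id SERIAL PRIMARY KEY,
    code TEXT NOT NULL UNIQUE
);

-- Unit metadata columns are added only when missing (unit_id is hypothetical)
ALTER TABLE production_output
    ADD COLUMN IF NOT EXISTS unit_id INTEGER REFERENCES measurement_unit(id);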

Configure granular database settings in your PowerShell session before running migrations:

$env:DATABASE_DRIVER = 'postgresql'
$env:DATABASE_HOST = 'localhost'
$env:DATABASE_PORT = '5432'
$env:DATABASE_USER = 'calminer'
$env:DATABASE_PASSWORD = 's3cret'
$env:DATABASE_NAME = 'calminer'
$env:DATABASE_SCHEMA = 'public'
python scripts/setup_database.py --run-migrations --seed-data --dry-run
python scripts/setup_database.py --run-migrations --seed-data

The dry-run invocation reports which steps would execute without making changes. The live run applies the baseline (if not already recorded in schema_migrations) and seeds the reference data relied upon by the UI and API.

When --seed-data is supplied without --run-migrations, the bootstrap script automatically applies any pending SQL migrations first so the application_setting table (and future settings-backed features) are present before seeding.

The application still accepts DATABASE_URL as a fallback if the granular variables are not set.
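
For reference, the fallback takes the usual SQLAlchemy connection-URL form, mirroring the granular values above:

$env:DATABASE_URL = 'postgresql://calminer:s3cret@localhost:5432/calminer'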

Database bootstrap workflow

Provision or refresh a database instance with scripts/setup_database.py. Populate the required environment variables (an example lives at config/setup_test.env.example) and run:

# Load test credentials (PowerShell)
Get-Content .\config\setup_test.env.example |
  ForEach-Object {
    if ($_ -and -not $_.StartsWith('#')) {
      $name, $value = $_ -split '=', 2
      Set-Item -Path Env:$name -Value $value
    }
  }

# Dry-run to inspect the planned actions
python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data --dry-run -v

# Execute the full workflow
python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data -v

Typical log output confirms:

  • Admin and application connections succeed for the supplied credentials.
  • Database and role creation are idempotent (already present when rerun).
  • SQLAlchemy metadata either reports missing tables or "All tables already exist".
  • Migrations list pending files and finish with "Applied N migrations" (a new database reports "Applied 1 migrations" for 000_base.sql).

After a successful run the target database contains all application tables plus schema_migrations, and that table records each applied migration file. New installations only record 000_base.sql; upgraded environments retain historical entries alongside the baseline.
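
To inspect what a given database has recorded, query the tracking table directly (SELECT * avoids assuming its column layout; adjust the database name to your environment):

psql -d calminer -c "SELECT * FROM schema_migrations"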

Local Postgres via Docker Compose

For local validation without installing Postgres directly, use the provided compose file:

docker compose -f docker-compose.postgres.yml up -d
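
Before running the setup script, a quick connectivity check against the container can save a failed run (requires a local psql client; credentials match the compose defaults described below):

psql "postgresql://calminer:secret@127.0.0.1:5433/calminer_local" -c "SELECT 1"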

Summary

  1. Start the Postgres container with docker compose -f docker-compose.postgres.yml up -d.
  2. Export the granular database environment variables (host 127.0.0.1, port 5433, database calminer_local, user/password calminer/secret).
  3. Run the setup script twice: first with --dry-run to preview actions, then without it to apply changes.
  4. When finished, stop and optionally remove the container/volume using docker compose -f docker-compose.postgres.yml down.

The service exposes Postgres 16 on localhost:5433 with the database calminer_local and the role calminer (password secret). When the container is running, set the granular environment variables before invoking the setup script:

$env:DATABASE_DRIVER = 'postgresql'
$env:DATABASE_HOST = '127.0.0.1'
$env:DATABASE_PORT = '5433'
$env:DATABASE_USER = 'calminer'
$env:DATABASE_PASSWORD = 'secret'
$env:DATABASE_NAME = 'calminer_local'
$env:DATABASE_SCHEMA = 'public'

python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data --dry-run -v
python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data -v

When testing is complete, shut down the container (and optional persistent volume) with:

docker compose -f docker-compose.postgres.yml down
docker volume rm calminer_postgres_local_postgres_data  # optional cleanup

Document successful runs (or issues encountered) in .github/instructions/DONE.TODO.md for future reference.

Seeding reference data

scripts/seed_data.py provides targeted control over the baseline datasets when the full setup script is not required:

python scripts/seed_data.py --currencies --units --dry-run
python scripts/seed_data.py --currencies --units

The seeder upserts the canonical currency catalog (USD, EUR, CLP, RMB, GBP, CAD, AUD) using ASCII-safe symbols (USD$, EUR, etc.) and the measurement units referenced by the UI (tonnes, kilograms, pounds, liters, cubic_meters, kilowatt_hours). The setup script invokes the same seeder when --seed-data is provided and verifies the expected rows afterward, warning if any are missing or inactive.
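
To spot-check the seeded rows afterwards, query the reference tables directly (SELECT * avoids assuming column names; the connection details here match the local compose stack):

psql "postgresql://calminer:secret@127.0.0.1:5433/calminer_local" -c "SELECT * FROM currency"
psql "postgresql://calminer:secret@127.0.0.1:5433/calminer_local" -c "SELECT * FROM measurement_unit"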

Rollback guidance

scripts/setup_database.py now tracks compensating actions when it creates the database or application role. If a later step fails, the script replays those rollback actions (dropping the newly created database or role and revoking grants) before exiting. Dry runs never register rollback steps and remain read-only.

If the script reports that some rollback steps could not complete (for example, because a connection cannot be established), rerun the script with --dry-run to confirm the desired end state and then apply the outstanding cleanup manually:

python scripts/setup_database.py --ensure-database --ensure-role --dry-run -v

# Manual cleanup examples when automation cannot connect
psql -d postgres -c "DROP DATABASE IF EXISTS calminer"
psql -d postgres -c "DROP ROLE IF EXISTS calminer"

After a failure and rollback, rerun the full setup once the environment issues are resolved.

CI pipeline environment

The .gitea/workflows/test.yml job spins up a temporary PostgreSQL 16 container and runs the setup script twice: once with --dry-run to validate the plan and again without it to apply migrations and seeds. No external secrets are required; the workflow sets the following environment variables for both invocations and for pytest:

Variable                      Value         Purpose
DATABASE_DRIVER               postgresql    Signals the driver to the setup script
DATABASE_HOST                 postgres      Hostname of the Postgres job service container
DATABASE_PORT                 5432          Default service port
DATABASE_NAME                 calminer_ci   Target database created by the workflow
DATABASE_USER                 calminer      Application role used during tests
DATABASE_PASSWORD             secret        Password for both the admin and app role
DATABASE_SCHEMA               public        Default schema for the tests
DATABASE_SUPERUSER            calminer      Setup script uses the same role for admin actions
DATABASE_SUPERUSER_PASSWORD   secret        Matches the Postgres service password
DATABASE_SUPERUSER_DB         calminer_ci   Database to connect to for admin operations

The workflow also updates DATABASE_URL for pytest to point at the CI Postgres instance. Existing tests continue to work unchanged, since SQLAlchemy reads the URL exactly as it does locally.

Because the workflow provisions everything inline, no repository or organization secrets need to be configured for basic CI runs. If you later move the setup step to staging or production pipelines, replace these inline values with secrets managed by the CI platform. When running on self-hosted runners behind an HTTP proxy or apt cache, ensure Playwright dependencies and OS packages inherit the same proxy settings that the workflow configures prior to installing browsers.
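
For example, on PowerShell-based runners the standard proxy variables can be exported before browser installation so the downloads go through the same proxy (the proxy URL is a placeholder):

$env:HTTP_PROXY = 'http://proxy.internal:3128'
$env:HTTPS_PROXY = 'http://proxy.internal:3128'
python -m playwright install --with-deps chromium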

Staging environment workflow

Use the staging checklist in docs/staging_environment_setup.md when running the setup script against the shared environment. A sample variable file (config/setup_staging.env) records the expected inputs (host, port, admin/application roles); copy it outside the repository or load the values securely via your shell before executing the workflow.

Recommended execution order:

  1. Dry run with --dry-run -v to confirm connectivity and review planned operations. Capture the output to reports/setup_staging_dry_run.log (or similar) for auditing.
  2. Execute the live run with the same flags minus --dry-run to provision the database, role grants, migrations, and seed data. Save the log as reports/setup_staging_apply.log.
  3. Repeat the dry run to verify idempotency and record the result (for example reports/setup_staging_post_apply.log).

Record any issues in .github/instructions/TODO.md or .github/instructions/DONE.TODO.md as appropriate so the team can track follow-up actions.

Database Objects

The database contains tables such as capex, opex, chemical_consumption, fuel_consumption, water_consumption, scrap_consumption, production_output, equipment_operation, ore_batch, exchange_rate, and simulation_result.

Current implementation status (2025-10-21)

  • Currency normalization: a currency table and backfill scripts exist; routes accept currency_id and currency_code for compatibility.
  • Simulation engine: the scaffolding in services/simulation.py behind /api/simulations/run returns in-memory results; persistence to models/simulation_result is planned.
  • Reporting: services/reporting.py provides summary statistics used by POST /api/reporting/summary.
  • Tests & coverage: unit and E2E suites exist; recent local coverage is >90%.
  • Remaining work: authentication, persisting simulation runs, CI/CD, and containerization.

Where to look next