Add UI and styling documentation; remove idempotency and logging audits

- Introduced a new document outlining UI structure, reusable template components, CSS variable conventions, and per-page data/actions for the CalMiner application.
- Removed outdated idempotency audit and logging audit documents as they are no longer relevant.
- Updated quickstart guide to streamline developer setup instructions and link to relevant documentation.
- Created a roadmap document detailing scenario enhancements and data management strategies.
- Deleted the seed data plan document to consolidate information into the setup process.
- Refactored setup_database.py for improved logging and error handling during database setup and migration processes.
2025-10-29 13:20:44 +01:00
parent 1f58de448c
commit 04d7f202b6
19 changed files with 609 additions and 752 deletions


# Developer Quickstart

- [Development](#development)
  - [User Interface](#user-interface)
- [Staging](#staging)
- [Deployment](#deployment)
  - [Using Docker Compose](#using-docker-compose)
  - [Manual Docker Deployment](#manual-docker-deployment)
  - [Database Deployment \& Migrations](#database-deployment--migrations)
- [Usage Overview](#usage-overview)
  - [Theme configuration](#theme-configuration)
- [Dashboard Preview](#dashboard-preview)
- [Testing](#testing)
- [Migrations \& Baseline](#migrations--baseline)
- [Database bootstrap workflow](#database-bootstrap-workflow)
- [Database Objects](#database-objects)
- [Current implementation status](#current-implementation-status-2025-10-21)
- [Where to look next](#where-to-look-next)

This document provides a quickstart guide for developers to set up and run the CalMiner application locally, together with the expanded development, usage, testing, and migration guidance moved out of the top-level README for brevity.
## Development

### Prerequisites

- Python 3.10+
- Node.js 20+ (for Playwright-driven E2E tests)
- Docker (optional, required for containerized workflows)
- Git

See [Development Setup](docs/developer/development_setup.md) for the full environment guide.

To get started locally:

```powershell
# Clone the repository
git clone https://git.allucanget.biz/allucanget/calminer.git
cd calminer

# Create and activate a virtual environment
python -m venv .venv
.\.venv\Scripts\Activate.ps1

# Install dependencies
pip install -r requirements.txt

# Start the development server
uvicorn main:app --reload
```

### User Interface

There is a dedicated [UI and Style](docs/developer/ui_and_style.md) guide for frontend contributors.

## Staging

Staging environment setup is covered in [Staging Environment Setup](docs/developer/staging_environment_setup.md).
## Deployment

The application can be deployed using Docker containers.

### Using Docker Compose

For production deployment, use the provided `docker-compose.yml`:

```bash
docker-compose up -d
```

This starts the FastAPI app and PostgreSQL database.

### Manual Docker Deployment

Build and run the container manually:

```bash
docker build -t calminer .
docker run -d -p 8000:8000 \
  -e DATABASE_HOST=your-postgres-host \
  -e DATABASE_USER=calminer \
  -e DATABASE_PASSWORD=your-password \
  -e DATABASE_NAME=calminer_db \
  calminer
```

On Windows, the same workflow in PowerShell with the full set of connection variables (the multi-stage build keeps the runtime image small):

```powershell
# Build the application image
docker build -t calminer:latest .

# Start the container on port 8000, supplying environment variables (e.g., Postgres connection)
docker run --rm -p 8000:8000 ^
    -e DATABASE_DRIVER="postgresql" ^
    -e DATABASE_HOST="db.host" ^
    -e DATABASE_PORT="5432" ^
    -e DATABASE_USER="calminer" ^
    -e DATABASE_PASSWORD="s3cret" ^
    -e DATABASE_NAME="calminer" ^
    -e DATABASE_SCHEMA="public" ^
    calminer:latest
```

If you maintain a Postgres or Redis dependency locally, the compose stacks below pair them with the app container. Either way, the Docker image expects the database to be reachable and migrations executed before serving traffic, so ensure the database is set up and migrated before running.
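For reference, a minimal sketch of such a stack. The service names and credentials here are illustrative assumptions that mirror the documented defaults; the checked-in `docker-compose.dev.yml` described below is the maintained version:

```yaml
# Illustrative only -- see docker-compose.dev.yml for the maintained stack.
services:
  api:
    build: .
    ports:
      - "8000:8000"
    environment:
      DATABASE_DRIVER: postgresql
      DATABASE_HOST: db          # matches the database service name below
      DATABASE_PORT: "5432"
      DATABASE_USER: calminer
      DATABASE_PASSWORD: s3cret
      DATABASE_NAME: calminer
      DATABASE_SCHEMA: public
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: calminer
      POSTGRES_PASSWORD: s3cret
      POSTGRES_DB: calminer
    volumes:
      - pg_data:/var/lib/postgresql/data
volumes:
  pg_data:
```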
### Compose-driven development stack
The repository ships with `docker-compose.dev.yml`, wiring the API and database into a single development stack. It defaults to the Debian-based `postgres:16` image so UTF-8 locales are available without additional tooling and mounts persistent data in the `pg_data_dev` volume.
Typical workflow (run from the repository root):
```powershell
# Build images and ensure dependencies are cached
docker compose -f docker-compose.dev.yml build
# Start FastAPI and Postgres in the background
docker compose -f docker-compose.dev.yml up -d
# Tail logs for both services
docker compose -f docker-compose.dev.yml logs -f
# Stop services but keep the database volume for reuse
docker compose -f docker-compose.dev.yml down
# Remove the persistent Postgres volume when you need a clean slate
docker volume rm calminer_pg_data_dev # optional; confirm exact name with `docker volume ls`
```
Environment variables used by the containers live directly in the compose file (`DATABASE_HOST=db`, `DATABASE_NAME=calminer_dev`, etc.), so no extra `.env` file is required. Adjust or override them via `docker compose ... -e VAR=value` if necessary.
For a deeper walkthrough (including volume naming conventions, port mappings, and how the stack fits into the broader architecture), cross-check `docs/architecture/15_development_setup.md`. That chapter mirrors the compose defaults captured here so both documents stay in sync.
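For example, a one-off override when you want a scratch database without editing the compose file. The `api` service name is an assumption here; confirm the actual name in `docker-compose.dev.yml`:

```powershell
# Run a one-off container with an overridden database name (service name assumed to be `api`)
docker compose -f docker-compose.dev.yml run --rm -e DATABASE_NAME=calminer_scratch api python scripts/setup_database.py --run-migrations --dry-run
```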
### Compose-driven test stack
Use `docker-compose.test.yml` to spin up a Postgres 16 container and execute the Python test suite in a disposable worker container:
```powershell
# Build images used by the test workflow
docker compose -f docker-compose.test.yml build
# Run the default target (unit tests)
docker compose -f docker-compose.test.yml run --rm tests
# Run a specific target (e.g., full suite)
docker compose -f docker-compose.test.yml run --rm -e PYTEST_TARGET=tests tests
# Tear everything down and drop the test database volume
docker compose -f docker-compose.test.yml down -v
```
The `tests` service prepares the database via `scripts/setup_database.py` before invoking pytest, ensuring migrations and seed data mirror CI behaviour. Named volumes (`pip_cache_test`, `pg_data_test`) cache dependencies and data between runs; remove them with `down -v` whenever you want a pristine environment. An `api` service is available on `http://localhost:8001` for spot-checking API responses against the same test database.
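Once the stack is up, the companion `api` service can be spot-checked directly; a quick sketch assuming the `/health` endpoint noted in the production section is available here as well:

```powershell
# Probe the test API container (FastAPI exposes /health)
Invoke-RestMethod -Uri "http://localhost:8001/health"
```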
### Compose-driven production stack
Use `docker-compose.prod.yml` for operator-managed deployments. The file defines:
- `api`: FastAPI container with configurable CPU/memory limits and a `/health` probe.
- `traefik`: Optional (enable with the `reverse-proxy` profile) to terminate TLS and route traffic based on `CALMINER_DOMAIN`.
- `postgres`: Optional (enable with the `local-db` profile) when a managed database is unavailable; persists data in `pg_data_prod` and mounts `./backups`.
Commands (run from the repository root):
```powershell
# Prepare environment variables once per environment
copy config\setup_production.env.example config\setup_production.env
# Start API behind Traefik
docker compose ^
--env-file config/setup_production.env ^
-f docker-compose.prod.yml ^
--profile reverse-proxy ^
up -d
# Add the local Postgres profile when running without managed DB
docker compose ^
--env-file config/setup_production.env ^
-f docker-compose.prod.yml ^
--profile reverse-proxy --profile local-db ^
up -d
# Apply migrations/seed data
docker compose ^
--env-file config/setup_production.env ^
-f docker-compose.prod.yml ^
run --rm api ^
python scripts/setup_database.py --run-migrations --seed-data
# Check health (FastAPI exposes /health)
docker compose -f docker-compose.prod.yml ps
# Stop services (volumes persist unless -v is supplied)
docker compose -f docker-compose.prod.yml down
```
Key environment variables (documented in `config/setup_production.env.example`): container image tag, domain/ACME email, published ports, network name, and resource limits (`API_LIMIT_CPUS`, `API_LIMIT_MEMORY`, etc.).
For deployment topology diagrams and operational sequencing, see [docs/architecture/07_deployment_view.md](architecture/07_deployment_view.md#production-docker-compose-topology).
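Beyond `docker compose ps`, the health probe can be exercised end to end. A sketch assuming Traefik publishes the API over HTTPS on the configured domain:

```powershell
# Verify the API responds behind the reverse proxy (domain from config/setup_production.env)
Invoke-RestMethod -Uri "https://$env:CALMINER_DOMAIN/health"
```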
### Database Deployment & Migrations

See the [Database Deployment & Migrations](docs/architecture/07_deployment/07_02_database_deployment_migrations.md) document for details on database deployment and migration strategies.
## Usage Overview
- **Run the application**: Follow the [Development Setup](docs/developer/development_setup.md) to get the application running locally.
- **Access the UI**: Open your web browser and navigate to `http://localhost:8000/ui` to access the user interface.
- **API base URL**: `http://localhost:8000/api`
  - Key routes include creating scenarios, parameters, costs, consumption, production, equipment, maintenance, and reporting summaries. See the `routes/` directory for full details.
- **UI base URL**: `http://localhost:8000/ui`
### Theme configuration
- Open `/ui/settings` to access the Settings dashboard. The **Theme Colors** form lists every CSS variable persisted in the `application_setting` table. Updates apply immediately across the UI once saved.
- Use the accompanying API endpoints for automation or integration tests (see the sketch after this list):
- `GET /api/settings/css` returns the active variables, defaults, and metadata describing any environment overrides.
- `PUT /api/settings/css` accepts a payload such as `{"variables": {"--color-primary": "#112233"}}` and persists the change unless an environment override is in place.
- Environment variables prefixed with `CALMINER_THEME_` win over database values. For example, setting `CALMINER_THEME_COLOR_PRIMARY="#112233"` renders the corresponding input read-only and surfaces the override in the Environment Overrides table.
- Acceptable values include hex (`#rrggbb` or `#rrggbbaa`), `rgb()/rgba()`, and `hsl()/hsla()` expressions with the expected number of components. Invalid inputs trigger a validation error and the API responds with HTTP 422.
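A minimal sketch of driving these endpoints from the shell, assuming a local instance on the default port:

```powershell
# Read active variables, defaults, and override metadata
Invoke-RestMethod -Uri "http://localhost:8000/api/settings/css" -Method Get

# Persist a new primary color (invalid values are rejected with HTTP 422)
Invoke-RestMethod -Uri "http://localhost:8000/api/settings/css" -Method Put `
    -ContentType "application/json" `
    -Body '{"variables": {"--color-primary": "#112233"}}'
```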
## Dashboard Preview
1. Start the FastAPI server and navigate to `/`.
2. Review the headline metrics, scenario snapshot table, and cost/activity charts sourced from the current database state.
3. Use the "Refresh Dashboard" button to pull freshly aggregated data via `/ui/dashboard/data` without reloading the page (scripted below).
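The refresh endpoint from step 3 can also be scripted, which is handy for smoke tests; a small sketch assuming the default local port:

```powershell
# Fetch freshly aggregated dashboard data, as the "Refresh Dashboard" button does
Invoke-RestMethod -Uri "http://localhost:8000/ui/dashboard/data"
```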
## Testing

Testing strategy and CI integration are described in the [Testing CI](docs/architecture/07_deployment/07_01_testing_ci.md) document. Run the unit test suite locally with:

```powershell
pytest
```

E2E tests use Playwright and a session-scoped `live_server` fixture that starts the app at `http://localhost:8001` for browser-driven tests.
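Before the first browser-driven run, install the Playwright browsers. The E2E test path below is an assumption; adjust it to the repository's actual test layout:

```powershell
# One-time browser installation for Playwright
python -m playwright install

# Run the browser-driven suite (path assumed; check the repository's test layout)
pytest tests/e2e
```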
## Migrations & Baseline
A consolidated baseline migration (`scripts/migrations/000_base.sql`) captures all schema changes required for a fresh installation. The script is idempotent: it creates the `currency` and `measurement_unit` reference tables, provisions the `application_setting` store for configurable UI/system options, ensures consumption and production records expose unit metadata, and enforces the foreign keys used by CAPEX and OPEX.
Configure granular database settings in your PowerShell session before running migrations:
```powershell
$env:DATABASE_DRIVER = 'postgresql'
$env:DATABASE_HOST = 'localhost'
$env:DATABASE_PORT = '5432'
$env:DATABASE_USER = 'calminer'
$env:DATABASE_PASSWORD = 's3cret'
$env:DATABASE_NAME = 'calminer'
$env:DATABASE_SCHEMA = 'public'
python scripts/setup_database.py --run-migrations --seed-data --dry-run
python scripts/setup_database.py --run-migrations --seed-data
```
The dry-run invocation reports which steps would execute without making changes. The live run applies the baseline (if not already recorded in `schema_migrations`) and seeds the reference data relied upon by the UI and API.
> When `--seed-data` is supplied without `--run-migrations`, the bootstrap script automatically applies any pending SQL migrations first so the `application_setting` table (and future settings-backed features) are present before seeding.
>
> The application still accepts `DATABASE_URL` as a fallback if the granular variables are not set.
## Database bootstrap workflow
Provision or refresh a database instance with `scripts/setup_database.py`. Populate the required environment variables (an example lives at `config/setup_test.env.example`) and run:
```powershell
# Load test credentials (PowerShell)
Get-Content .\config\setup_test.env.example |
    ForEach-Object {
        if ($_ -and -not $_.StartsWith('#')) {
            $name, $value = $_ -split '=', 2
            Set-Item -Path Env:$name -Value $value
        }
    }
# Dry-run to inspect the planned actions
python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data --dry-run -v
# Execute the full workflow
python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data -v
```
Typical log output confirms:
- Admin and application connections succeed for the supplied credentials.
- Database and role creation are idempotent (`already present` when rerun).
- SQLAlchemy metadata either reports missing tables or `All tables already exist`.
- Migrations list pending files and finish with `Applied N migrations` (a new database reports `Applied 1 migrations` for `000_base.sql`).
After a successful run the target database contains all application tables plus `schema_migrations`, and that table records each applied migration file. New installations only record `000_base.sql`; upgraded environments retain historical entries alongside the baseline.
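To confirm what the bookkeeping table recorded, query it directly. A sketch using the granular variables from above (the exact column layout of `schema_migrations` may differ):

```powershell
# List applied migrations recorded by the setup script
psql -h $env:DATABASE_HOST -p $env:DATABASE_PORT -U $env:DATABASE_USER -d $env:DATABASE_NAME -c "SELECT * FROM schema_migrations;"
```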
### Local Postgres via Docker Compose
For local validation without installing Postgres directly, use the provided compose file:
```powershell
docker compose -f docker-compose.postgres.yml up -d
```
#### Summary
1. Start the Postgres container with `docker compose -f docker-compose.postgres.yml up -d`.
2. Export the granular database environment variables (host `127.0.0.1`, port `5433`, database `calminer_local`, user/password `calminer`/`secret`).
3. Run the setup script twice: first with `--dry-run` to preview actions, then without it to apply changes.
4. When finished, stop and optionally remove the container/volume using `docker compose -f docker-compose.postgres.yml down`.
The service exposes Postgres 16 on `localhost:5433` with database `calminer_local` and role `calminer`/`secret`. When the container is running, set the granular environment variables before invoking the setup script:
```powershell
$env:DATABASE_DRIVER = 'postgresql'
$env:DATABASE_HOST = '127.0.0.1'
$env:DATABASE_PORT = '5433'
$env:DATABASE_USER = 'calminer'
$env:DATABASE_PASSWORD = 'secret'
$env:DATABASE_NAME = 'calminer_local'
$env:DATABASE_SCHEMA = 'public'
python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data --dry-run -v
python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data -v
```
When testing is complete, shut down the container (and optional persistent volume) with:
```powershell
docker compose -f docker-compose.postgres.yml down
docker volume rm calminer_postgres_local_postgres_data # optional cleanup
```
### Seeding reference data
`scripts/seed_data.py` provides targeted control over the baseline datasets when the full setup script is not required:
```powershell
python scripts/seed_data.py --currencies --units --dry-run
python scripts/seed_data.py --currencies --units
```
The seeder upserts the canonical currency catalog (`USD`, `EUR`, `CLP`, `RMB`, `GBP`, `CAD`, `AUD`) using ASCII-safe symbols (`USD$`, `EUR`, etc.) and the measurement units referenced by the UI (`tonnes`, `kilograms`, `pounds`, `liters`, `cubic_meters`, `kilowatt_hours`). The setup script invokes the same seeder when `--seed-data` is provided and verifies the expected rows afterward, warning if any are missing or inactive.
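To verify the seeded rows, a direct query of the `currency` and `measurement_unit` tables works as well; the query below deliberately avoids guessing column names:

```powershell
# Inspect the seeded currency catalog after a live seeding run
psql -h $env:DATABASE_HOST -p $env:DATABASE_PORT -U $env:DATABASE_USER -d $env:DATABASE_NAME -c "SELECT * FROM currency;"
```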
### Rollback guidance
`scripts/setup_database.py` now tracks compensating actions when it creates the database or application role. If a later step fails, the script replays those rollback actions (dropping the newly created database or role and revoking grants) before exiting. Dry runs never register rollback steps and remain read-only.
If the script reports that some rollback steps could not complete—for example because a connection cannot be established—rerun the script with `--dry-run` to confirm the desired end state and then apply the outstanding cleanup manually:
```powershell
python scripts/setup_database.py --ensure-database --ensure-role --dry-run -v
# Manual cleanup examples when automation cannot connect
psql -d postgres -c "DROP DATABASE IF EXISTS calminer"
psql -d postgres -c "DROP ROLE IF EXISTS calminer"
```
After a failure and rollback, rerun the full setup once the environment issues are resolved.
### CI pipeline environment
The `.gitea/workflows/test.yml` job spins up a temporary PostgreSQL 16 container and runs the setup script twice: once with `--dry-run` to validate the plan and again without it to apply migrations and seeds. No external secrets are required; the workflow sets the following environment variables for both invocations and for pytest:
| Variable | Value | Purpose |
| ----------------------------- | ------------- | ------------------------------------------------- |
| `DATABASE_DRIVER` | `postgresql` | Signals the driver to the setup script |
| `DATABASE_HOST` | `postgres` | Hostname of the Postgres job service container |
| `DATABASE_PORT` | `5432` | Default service port |
| `DATABASE_NAME` | `calminer_ci` | Target database created by the workflow |
| `DATABASE_USER` | `calminer` | Application role used during tests |
| `DATABASE_PASSWORD` | `secret` | Password for both admin and app role |
| `DATABASE_SCHEMA` | `public` | Default schema for the tests |
| `DATABASE_SUPERUSER` | `calminer` | Setup script uses the same role for admin actions |
| `DATABASE_SUPERUSER_PASSWORD` | `secret` | Matches the Postgres service password |
| `DATABASE_SUPERUSER_DB` | `calminer_ci` | Database to connect to for admin operations |
The workflow also updates `DATABASE_URL` for pytest to point at the CI Postgres instance. Existing tests continue to work unchanged, since SQLAlchemy reads the URL exactly as it does locally.
Because the workflow provisions everything inline, no repository or organization secrets need to be configured for basic CI runs. If you later move the setup step to staging or production pipelines, replace these inline values with secrets managed by the CI platform. When running on self-hosted runners behind an HTTP proxy or apt cache, ensure Playwright dependencies and OS packages inherit the same proxy settings that the workflow configures prior to installing browsers.
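For reference, the pytest connection string assembled from the table above would look like this; the exact value is set by the workflow, and the `postgres` hostname only resolves inside the CI job network:

```powershell
# Equivalent DATABASE_URL for the CI Postgres service (assembled from the table above)
$env:DATABASE_URL = "postgresql://calminer:secret@postgres:5432/calminer_ci"
```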
### Staging environment workflow
Use the staging checklist in `docs/staging_environment_setup.md` when running the setup script against the shared environment. A sample variable file (`config/setup_staging.env`) records the expected inputs (host, port, admin/application roles); copy it outside the repository or load the values securely via your shell before executing the workflow.
Recommended execution order:
1. Dry run with `--dry-run -v` to confirm connectivity and review planned operations. Capture the output to `reports/setup_staging_dry_run.log` (or similar) for auditing (see the sketch after this list).
2. Execute the live run with the same flags minus `--dry-run` to provision the database, role grants, migrations, and seed data. Save the log as `reports/setup_staging_apply.log`.
3. Repeat the dry run to verify idempotency and record the result (for example `reports/setup_staging_post_apply.log`).
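A sketch of step 1 with log capture, assuming PowerShell on the operator workstation; the flags match the bootstrap workflow above:

```powershell
# Dry run against staging, capturing output for the audit trail
python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data --dry-run -v 2>&1 |
    Tee-Object -FilePath reports/setup_staging_dry_run.log
```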
## Database Objects
The database contains tables such as `capex`, `opex`, `chemical_consumption`, `fuel_consumption`, `water_consumption`, `scrap_consumption`, `production_output`, `equipment_operation`, `ore_batch`, `exchange_rate`, and `simulation_result`.
## Current implementation status (2025-10-21)
- Currency normalization: a `currency` table and backfill scripts exist; routes accept `currency_id` and `currency_code` for compatibility.
- Simulation engine: scaffolding in `services/simulation.py` and `/api/simulations/run` return in-memory results; persistence to `models/simulation_result` is planned.
- Reporting: `services/reporting.py` provides summary statistics used by `POST /api/reporting/summary`.
- Tests & coverage: unit and E2E suites exist; recent local coverage is >90%.
- Remaining work: authentication, persist simulation runs, CI/CD and containerization.
## Where to look next
- Architecture overview & chapters: [architecture](architecture/README.md) (per-chapter files under `docs/architecture/`)
- [Testing & CI](architecture/07_deployment/07_01_testing_ci.md)
- [Development setup](developer/development_setup.md)
- Theming: [Theming](docs/architecture/05_03_theming.md)
- Implementation plan & roadmap: [Solution strategy](architecture/04_solution_strategy.md)
- Routes: [routes](../routes/)
- Services: [services](../services/)