Compare commits

...

6 Commits

SHA1 Message Date
f18987ec8b docs: add Coolify deployment reference and configuration details 2025-11-15 13:21:59 +01:00
4dea0a9ae1 Add detailed SQLAlchemy models, navigation metadata, enumerations, Pydantic schemas, monitoring, and auditing documentation
- Introduced SQLAlchemy models for user management, project management, financial inputs, and pricing configuration.
- Created navigation metadata tables for sidebar and top-level menus.
- Cataloged enumerations used across ORM models and Pydantic schemas.
- Documented Pydantic schemas for API request/response validation, including authentication, project, scenario, import, and export schemas.
- Added monitoring and auditing tables for performance metrics and import/export logs.
- Updated security documentation to reflect changes in data model references.
2025-11-13 20:23:09 +01:00
07e68a553d Enhance documentation for data model and import/export processes
- Updated data model documentation to clarify relationships between projects, scenarios, and profitability calculations.
- Introduced a new guide for data import/export templates, detailing CSV and Excel workflows for profitability, capex, and opex data.
- Created a comprehensive field inventory for data import/export, outlining input fields, derived outputs, and snapshot columns.
- Renamed "Initial Capex Planner" to "Capex Planner" and "Processing Opex Planner" to "Opex Planner" for consistency across user guides.
- Adjusted access paths and related resources in user guides to reflect the new naming conventions.
- Improved clarity and consistency in descriptions and instructions throughout the user documentation.
2025-11-13 14:10:47 +01:00
fb6be6d84f feat: add functional requirements for profitability, Monte Carlo simulation, and Capex/Opex management; enhance user guide with planners 2025-11-13 09:20:10 +01:00
d3597bc8c9 feat: enhance data model documentation with detailed entity descriptions and relationships 2025-11-12 18:17:39 +01:00
747f430562 fix: update database initialization and seeding instructions in installation guide 2025-11-12 18:17:20 +01:00
23 changed files with 1895 additions and 590 deletions

View File

@@ -42,22 +42,40 @@ Once you are satisfied with your changes, submit a pull request to the main repo
## Continuous Integration
Calminer uses Gitea Actions for automated testing, linting, and deployment. The CI pipeline is defined in `.gitea/workflows/cicache.yml` and runs on pushes and pull requests to the `main` and `develop` branches.
Calminer uses Gitea Actions for automated testing, linting, and deployment. The pipeline is orchestrated by `.gitea/workflows/ci.yml`, which delegates to reusable stage workflows that run entirely on the self-hosted Gitea runner:
### Pipeline Stages
1. **Lint**: Checks code style with Ruff and Black.
2. **Test**: Runs pytest with coverage enforcement (80% threshold), using a PostgreSQL service. Uploads coverage.xml and pytest-report.xml artifacts.
3. **Build**: Builds Docker image and pushes to registry only on `main` branch pushes (not PRs) if registry secrets are configured.
- `.gitea/workflows/ci-lint.yml` checks out the repository, installs dependencies, and runs Ruff, Black, and Bandit.
- `.gitea/workflows/ci-test.yml` provisions PostgreSQL 17, installs dependencies, executes pytest with an 80% coverage threshold, and uploads `coverage.xml` plus `pytest-report.xml` artifacts.
- `.gitea/workflows/ci-build.yml` builds the Docker image, pushing tags only for `main` branch pushes (never for pull requests) when registry credentials are configured. It retains the optional deployment job gated by commit messages (`[deploy staging]`, `[deploy production]`). The workflow validates that the configured registry host matches the current Gitea instance to avoid publishing to the wrong registry.
- `.gitea/workflows/deploy-coolify.yml` runs on `push` events to `main` (and via manual dispatch) once the repository is checked out. It bundles `docker-compose.prod.yml`, requires Coolify secrets (`COOLIFY_BASE_URL`, `COOLIFY_API_TOKEN`, `COOLIFY_APPLICATION_ID`, optional `COOLIFY_DEPLOY_ENV`), and calls the Coolify `/api/v1/deploy` endpoint with the application UUID.
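The deploy step reduces to an authenticated POST against that endpoint. The following sketch shows an equivalent call in Python; only the endpoint path and secret names come from the workflow description above, while the `uuid` query-parameter name and the response shape are assumptions.

```python
# Minimal sketch of the deploy trigger performed by deploy-coolify.yml,
# assuming Coolify accepts the application UUID as a query parameter.
import os

import requests

base_url = os.environ["COOLIFY_BASE_URL"].rstrip("/")  # no trailing slash, per the secrets list
token = os.environ["COOLIFY_API_TOKEN"]
app_uuid = os.environ["COOLIFY_APPLICATION_ID"]

response = requests.post(
    f"{base_url}/api/v1/deploy",
    headers={"Authorization": f"Bearer {token}"},
    params={"uuid": app_uuid},  # parameter name is an assumption
    timeout=30,
)
response.raise_for_status()
print(response.json())
```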
### Workflow Behavior
- Triggers on push/PR to `main` or `develop`.
- Linting must pass before tests run.
- Tests must pass before build runs.
- Coverage below 80% fails the test stage.
- Artifacts are available for PR inspection.
- Docker push occurs only for main branch commits with valid registry credentials.
- `CI - Lint` triggers on push/PR to `main`, `develop`, or `v2` and must succeed before downstream workflows start.
- `CI - Test` and `CI - Build` run as dependent jobs within the orchestrating workflow, so artifacts and images reflect the linted commit.
- Coverage below 80% fails the test workflow and stops the build sequence.
- Artifacts remain available for PR inspection via the test workflow.
- Docker pushes occur only for main-branch pushes (PRs will never publish images). Deployments run exclusively on `main` pushes and require explicit `[deploy staging]` / `[deploy production]` markers.
- Coolify automation also runs only for `main` pushes and relies on the configured application UUID to target the correct deployment.
### Gitea Runner and Registry Configuration
The runner and registry are self-hosted within Gitea. Configure the following secrets in the repository settings before enabling the workflows:
- `REGISTRY_URL`: Base URL of the Gitea container registry (for example `https://git.example.com`).
- `REGISTRY_USERNAME` / `REGISTRY_PASSWORD`: Credentials with permission to push images to the project namespace.
- `COOLIFY_BASE_URL`: Base URL to the Coolify instance (no trailing slash).
- `COOLIFY_API_TOKEN`: Personal API token from Coolify with deploy permission.
- `COOLIFY_APPLICATION_ID`: Coolify application UUID (see **Applications → Settings → UUID** in the Coolify UI).
- `COOLIFY_DEPLOY_ENV` (optional): Multiline environment file content; omit if `deploy/.env` is stored in the repo.
- `KUBE_CONFIG`, `STAGING_KUBE_CONFIG`, `PROD_KUBE_CONFIG` (optional): Base64 encoded kubeconfig for the optional k8s deployment job.
- `K8S_DEPLOY_ENABLED` (optional): Set to `true` to allow the CI deploy job to run Kubernetes steps; leave unset/any other value to skip k8s updates (Coolify deploy still runs).
Tips:
- Ensure the registry host resolves from the runner network; runner logs such as `logs/ci-lint-399.log` will show failures if the host cannot be resolved.
- The workflows prefer `GITEA_*` environment variables when available (for example `GITEA_REF`, `GITEA_SHA`) and fall back to the GitHub-compatible names, so they behave correctly on a native Gitea runner.
- If you change the registry endpoint, update `REGISTRY_URL` and review the warning emitted by the "Validate registry configuration" step to avoid pushing images to the wrong registry.
### Local Testing

View File

@@ -65,37 +65,36 @@ Before you begin, ensure that you have the following prerequisites installed on
Once the containers are up and running, you can access the Calminer application by navigating to `http://localhost:8003` in your web browser.
If you are running the application on a remote server, replace `localhost` with the server's IP address or domain name.
5. **Database Initialization**
5. **Database Initialization & Seed Data**
The application container executes `/app/scripts/docker-entrypoint.sh` before launching the API. This entrypoint runs `python -m scripts.run_migrations`, which applies all Alembic migrations and keeps the schema current on every startup. No additional action is required when using Docker Compose, but you can review the logs to confirm the migrations completed successfully.
On container startup the FastAPI application executes `scripts.init_db` during the first `startup` event. This idempotent initializer performs the following steps:
For local development without Docker, run the same command after setting your environment variables:
- ensures PostgreSQL enum types exist (no duplicates are created),
- creates the core tables required for authentication, pricing, projects, scenarios, financial inputs, and simulation parameters,
- seeds default roles, admin user, pricing settings, and demo projects/scenarios/financial inputs using `INSERT ... ON CONFLICT DO NOTHING` semantics.
No additional action is required when using Docker Compose—the `uvicorn` process runs the initializer automatically before the bootstrap routines.
For local development without Docker, run the initializer manually after exporting your environment variables:
```bash
# activate your virtualenv first
python -m scripts.run_migrations
python -m scripts.init_db
```
The script is idempotent; it will only apply pending migrations.
Running the script multiple times is safe; it will reconcile records without duplicating data.
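The idempotency comes from conflict-tolerant inserts. A minimal sketch of that seeding style is shown below, using an illustrative `roles` table and placeholder connection string rather than the real schema:

```python
# Illustrative only: idempotent seeding with ON CONFLICT DO NOTHING in SQLAlchemy.
# Table, column names, and the DSN are examples, not the actual Calminer schema.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.dialects.postgresql import insert
from sqlalchemy.orm import DeclarativeBase, Session


class Base(DeclarativeBase):
    pass


class Role(Base):
    __tablename__ = "roles"
    id = Column(Integer, primary_key=True)
    name = Column(String(64), unique=True, nullable=False)


engine = create_engine("postgresql+psycopg2://user:pass@localhost/calminer")  # placeholder DSN

with Session(engine) as session:
    stmt = insert(Role).values([{"name": "admin"}, {"name": "viewer"}])
    # Re-running the script leaves existing rows untouched instead of failing.
    session.execute(stmt.on_conflict_do_nothing(index_elements=["name"]))
    session.commit()
```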
6. **Seed Default Accounts and Roles**
6. **Resetting the Schema (optional)**
After the schema is in place, run the initial data seeding utility so the default roles and administrator account exist:
To rebuild the schema from a clean slate in development, use the reset utility which drops the managed tables and enum types before rerunning `init_db`:
```bash
# activate your virtualenv first
python -m scripts.00_initial_data
CALMINER_ENV=development python -m scripts.reset_db
python -m scripts.init_db
```
The script reads the standard database environment variables (see below) and supports the following overrides:
- `CALMINER_SEED_ADMIN_EMAIL` (default `admin@calminer.local` for dev, `admin@calminer.com` for prod)
- `CALMINER_SEED_ADMIN_USERNAME` (default `admin`)
- `CALMINER_SEED_ADMIN_PASSWORD` (default `ChangeMe123!` — change in production)
- `CALMINER_SEED_ADMIN_ROLES` (comma list, always includes `admin`)
- `CALMINER_SEED_FORCE` (`true` to rotate the admin password on every run)
You can rerun the script safely; it updates existing roles and user details without creating duplicates.
The reset utility refuses to run when `CALMINER_ENV` indicates a production or staging environment, providing an extra safeguard against accidental destructive operations.
### Export Dependencies

View File

@@ -0,0 +1,41 @@
# Coolify Deployment Reference
This note captures the current production deployment inputs so the automated Coolify workflow can be wired up consistently.
## Compose bundle
- Use `docker-compose.prod.yml` as the base service definition.
- The compose file expects the following variables supplied by the deployment runner (either through an `.env` file or Coolify secret variables):
- `DATABASE_HOST`
- `DATABASE_PORT` (defaults to `5432` if omitted)
- `DATABASE_USER`
- `DATABASE_PASSWORD`
- `DATABASE_NAME`
- Optional `APT_CACHE_URL` build arg when a proxy is required.
- Expose port `8003` for the FastAPI service and `5432` for PostgreSQL.
## Runtime expectations
- The application container runs with `ENVIRONMENT=production` and `LOG_LEVEL=WARNING`, and raises the import/export limits via:
- `CALMINER_EXPORT_MAX_ROWS`
- `CALMINER_IMPORT_MAX_ROWS`
- `CALMINER_EXPORT_METADATA`
- `CALMINER_IMPORT_STAGING_TTL`
- PostgreSQL container requires persistent storage (volume `postgres_data`).
- The app health check hits `http://localhost:8003/health`.
## Coolify integration inputs (to provision secrets later)
- Coolify instance URL (e.g. `https://coolify.example.com`) and target project/app identifier.
- API token or CLI credentials with permission to trigger deployments.
- SSH key or repository token already configured in Coolify to pull this repository.
- Optional: webhook/event endpoint if we want to observe deployment status from CI.
## Manual deployment checklist (current state)
1. Ensure the compose environment variables above are defined in Coolify under the application settings.
2. Rebuild the application image with the latest commit (using `docker-compose.prod.yml`).
3. Trigger a deployment in Coolify so the platform pulls the new image and restarts the service.
4. Confirm the health check passes and review logs in Coolify.
These details will feed into the new `deploy-coolify` workflow so it can authenticate, trigger the deployment, and surface logs automatically.
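To confirm step 4 of the checklist programmatically, a short poll against the documented health endpoint is sufficient. In this sketch the host is a placeholder for the deployed address:

```python
# Simple post-deploy check against the health endpoint described above.
# Replace the host with the deployed application's address.
import time

import requests

HEALTH_URL = "http://localhost:8003/health"  # placeholder host

for attempt in range(10):
    try:
        if requests.get(HEALTH_URL, timeout=5).status_code == 200:
            print("health check passed")
            break
    except requests.ConnectionError:
        pass
    time.sleep(6)  # give the container time to restart
else:
    raise SystemExit("health check did not pass within a minute")
```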

View File

@@ -1,3 +1,22 @@
# API Documentation
<!-- TODO: Add API documentation -->
## Project & Scenario Endpoints
| Method | Path | Roles | Success | Common Errors | Notes |
| --- | --- | --- | --- | --- | --- |
| `GET` | `/projects` | viewer, analyst, project_manager, admin | 200 + `ProjectRead[]` | 401 unauthenticated, 403 insufficient role | Lists all projects visible to the caller. |
| `POST` | `/projects` | project_manager, admin | 201 + `ProjectRead` | 401, 403, 409 name conflict, 422 validation | Creates a project and seeds default pricing settings. |
| `GET` | `/projects/{project_id}` | viewer, analyst, project_manager, admin | 200 + `ProjectRead` | 401, 403, 404 missing project | Returns a single project by id. |
| `PUT` | `/projects/{project_id}` | project_manager, admin | 200 + `ProjectRead` | 401, 403, 404, 422 | Updates mutable fields (name, location, operation_type, description). |
| `DELETE` | `/projects/{project_id}` | project_manager, admin | 204 | 401, 403, 404 | Removes the project and cascading scenarios. |
| `GET` | `/projects/{project_id}/scenarios` | viewer, analyst, project_manager, admin | 200 + `ScenarioRead[]` | 401, 403, 404 | Lists scenarios that belong to the project. |
| `POST` | `/projects/{project_id}/scenarios` | project_manager, admin | 201 + `ScenarioRead` | 401, 403, 404 missing project, 409 duplicate name, 422 validation | Creates a scenario under the project. Currency defaults to pricing metadata when omitted. |
| `POST` | `/projects/{project_id}/scenarios/compare` | viewer, analyst, project_manager, admin | 200 + `ScenarioComparisonResponse` | 401, 403, 404, 422 mismatch/validation | Validates a list of scenario ids before comparison. |
| `GET` | `/scenarios/{scenario_id}` | viewer, analyst, project_manager, admin | 200 + `ScenarioRead` | 401, 403, 404 | Retrieves a single scenario. |
| `PUT` | `/scenarios/{scenario_id}` | project_manager, admin | 200 + `ScenarioRead` | 401, 403, 404, 422 | Updates scenario metadata (status, dates, currency, resource). |
| `DELETE` | `/scenarios/{scenario_id}` | project_manager, admin | 204 | 401, 403, 404 | Removes the scenario and dependent data. |
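For orientation, the sketch below walks through the two creation endpoints as a client would call them. The bearer-token header, payload field spellings, and enum serialization are assumptions drawn from the documented schemas, not verified requests.

```python
# Hypothetical client calls against the endpoints above; adjust auth and payloads
# to the deployed configuration before relying on this.
import requests

BASE = "http://localhost:8003"
HEADERS = {"Authorization": "Bearer <token>"}  # assumes bearer auth

project = requests.post(
    f"{BASE}/projects",
    json={"name": "Demo Pit", "operation_type": "OPEN_PIT", "location": "Chile"},  # enum spelling assumed
    headers=HEADERS,
    timeout=10,
)
project.raise_for_status()  # 201 + ProjectRead on success
project_id = project.json()["id"]

scenario = requests.post(
    f"{BASE}/projects/{project_id}/scenarios",
    json={"name": "Base case"},  # currency falls back to pricing metadata when omitted
    headers=HEADERS,
    timeout=10,
)
scenario.raise_for_status()  # 201 + ScenarioRead on success
```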
## OpenAPI Reference
- Interactive docs: `GET /docs`
- Raw schema (JSON): `GET /openapi.json`

View File

@@ -48,10 +48,6 @@ CalMiner aims to provide a comprehensive platform for mining project scenario an
- [Data Synchronization](#data-synchronization)
- [Error Handling](#error-handling)
- [Reporting and Analytics](#reporting-and-analytics)
- [Data Visualization](#data-visualization)
- [Custom Reports](#custom-reports)
- [Real-time Analytics](#real-time-analytics)
- [Historical Analysis](#historical-analysis)
## Multitenancy
@@ -71,11 +67,27 @@ Each tenant can have customized settings and preferences, managed through the Ma
## Data Model
The system employs a relational data model to manage and store information efficiently. The primary entities include Users, Projects, Datasets, and Results. Each entity is represented as a table in the database, with relationships defined through foreign keys.
The system employs a relational data model to manage and store information efficiently. Core entities include Users, Projects, Scenarios, Financial Inputs, Simulation Parameters, Pricing Settings, and supporting metadata tables. Relationships are enforced using foreign keys with cascading deletes where appropriate (e.g., scenarios cascade from projects, financial inputs cascade from scenarios).
Detailed [Data Model](08_concepts/02_data_model.md) documentation is available.
Domain-specific enumerations are mapped to PostgreSQL enum types and mirrored in the ORM layer via `models.enums`:
Discounted cash-flow metrics (NPV, IRR, Payback) referenced by the economic portion of the data model are described in the [Financial Metrics Specification](../specifications/financial_metrics.md), while stochastic extensions leverage the Monte Carlo engine documented in the [Monte Carlo Simulation Specification](../specifications/monte_carlo_simulation.md).
- `MiningOperationType` — categorises projects (open pit, underground, in-situ leach, etc.).
- `ScenarioStatus` — tracks lifecycle state (draft, active, archived).
- `FinancialCategory` and `CostBucket` — classify financial inputs for reporting granularity.
- `DistributionType` and `StochasticVariable` — describe probabilistic simulation parameters.
- `ResourceType` — annotates primary resource consumption for scenarios and stochastic parameters.
These enums back the relevant model columns through named SQLAlchemy `Enum` definitions referencing the same PostgreSQL type names, ensuring parity between application logic and database constraints.
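A minimal sketch of that pattern follows; the class and type names are illustrative of the approach, not copied from `models.enums`:

```python
# Pattern sketch: one Python enum backing both the ORM column and the named
# PostgreSQL enum type. Values shown are illustrative, not the source of truth.
import enum

from sqlalchemy import Column, Enum, Integer
from sqlalchemy.orm import DeclarativeBase


class ScenarioStatus(str, enum.Enum):
    DRAFT = "draft"
    ACTIVE = "active"
    ARCHIVED = "archived"


class Base(DeclarativeBase):
    pass


class Scenario(Base):
    __tablename__ = "scenarios"
    id = Column(Integer, primary_key=True)
    # `name=` pins the PostgreSQL enum type so migrations and models agree.
    status = Column(Enum(ScenarioStatus, name="scenario_status"), nullable=False)
```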
Schema creation and baseline seed data are handled by the idempotent initializer (`scripts/init_db.py`). On every application startup the initializer guarantees that:
1. Enum types exist exactly once.
2. Tables for authentication, pricing, and mining domain entities are present.
3. Default roles, administrator account, pricing settings, and representative demo projects/scenarios/financial inputs are available for immediate use.
Developers can reset to a clean slate with `scripts/reset_db.py`, which safely drops managed tables and enum types in non-production environments before rerunning the initializer.
Detailed [Data Model](08_concepts/02_data_model.md) documentation is available. Discounted cash-flow metrics (NPV, IRR, Payback) referenced by the economic portion of the data model are described in the [Financial Metrics Specification](../specifications/financial_metrics.md), while stochastic extensions leverage the Monte Carlo engine documented in the [Monte Carlo Simulation Specification](../specifications/monte_carlo_simulation.md).
All data interactions are handled through the [Data Access Layer](05_building_block_view.md#data-access-layer), ensuring consistency and integrity across operations.
@@ -229,20 +241,4 @@ Robust error handling mechanisms are implemented to manage integration failures
## Reporting and Analytics
The Calminer system includes comprehensive reporting and analytics capabilities to support data-driven decision-making.
### Data Visualization
The system provides tools for visualizing data through charts, graphs, and dashboards, making it easier to identify trends and insights.
### Custom Reports
Users can create custom reports based on specific criteria, allowing for tailored analysis of project performance and resource utilization.
### Real-time Analytics
The system supports real-time data processing and analytics, enabling users to access up-to-date information and respond quickly to changing conditions.
### Historical Analysis
The system maintains a history of key metrics and events, allowing for retrospective analysis and identification of long-term trends.
The Calminer system includes comprehensive reporting and analytics capabilities to support data-driven decision-making. Detailed [Reporting and Analytics](08_concepts/11_reporting_analytics.md) documentation is available.

View File

@@ -1,541 +1,23 @@
# Data Model
# Data Model Overview
The data model for the system is designed to capture the essential entities and their relationships involved in mining projects. Each entity is represented as a table in the relational database, with attributes defining their properties and foreign keys establishing relationships between them.
Calminer's data model spans several distinct layers: persisted ORM entities, Pydantic schemas used by the API, navigation metadata, shared enumerations, and operational telemetry tables. To make the material easier to scan, the original monolithic document has been split into focused reference pages.
## Table of Contents
## Reference Structure
- [Data Model](#data-model)
- [Table of Contents](#table-of-contents)
- [Relationships Diagram](#relationships-diagram)
- [User](#user)
- [User Diagram](#user-diagram)
- [User Roles](#user-roles)
- [System Administrator](#system-administrator)
- [Mandator Administrator](#mandator-administrator)
- [Project Manager](#project-manager)
- [Standard User](#standard-user)
- [Permissions](#permissions)
- [Mandator](#mandator)
- [Project](#project)
- [Location](#location)
- [Currency](#currency)
- [Unit](#unit)
- [Mining Technology](#mining-technology)
- [Product Model](#product-model)
- [Quality Metrics](#quality-metrics)
- [Financial Model](#financial-model)
- [Cost Model](#cost-model)
- [CAPEX](#capex)
- [OPEX](#opex)
- [Revenue Model](#revenue-model)
- [Revenue Streams](#revenue-streams)
- [Product Sales](#product-sales)
- [Investment Model](#investment-model)
- [Economic Model](#economic-model)
- [Discounted Cash Flow Metrics](#discounted-cash-flow-metrics)
- [Risk Model](#risk-model)
- [Parameter](#parameter)
- [Scenario](#scenario)
- [SQLAlchemy Models](./02_data_model/01_sqlalchemy_models.md) — Domain entities that persist projects, scenarios, pricing configuration, snapshots, and supporting records.
- [Navigation Metadata](./02_data_model/02_navigation.md) — Sidebar and menu configuration tables plus seeding/runtime notes.
- [Enumerations](./02_data_model/03_enumerations.md) — Shared enum definitions used across ORM models and schemas.
- [Pydantic Schemas](./02_data_model/04_pydantic.md) — Request/response models, import/export payloads, and validation nuances.
- [Monitoring and Auditing](./02_data_model/05_monitoring.md) — Telemetry and audit tables supporting observability.
Each detailed page retains the original headings and tables, so existing anchors and references can migrate with minimal disruption.
## How to Use This Overview
- Start with the SQLAlchemy reference when you need to understand persistence concerns or relationships between core domain objects.
- Jump to the Pydantic schemas document when adjusting API payloads or validation logic.
- Consult the enumerations list before introducing new enum values to keep backend and frontend usage aligned.
- Review the navigation metadata page when seeding or modifying the application sidebar.
- Use the monitoring and auditing section to track telemetry fields that drive dashboards and compliance reporting.
## Relationships Diagram
```mermaid
erDiagram
User ||--o{ UserRole : has
UserRole ||--o{ RolePermission : has
Permission ||--o{ RolePermission : assigned_to
User ||--o{ Mandator : belongs_to
Mandator ||--o{ Project : has_many
Project ||--|| Location : located_at
Project ||--o{ MiningTechnology : uses
Project ||--o{ FinancialModel : has_many
Project ||--o{ Parameter : has_many
Project ||--o{ Scenario : has_many
ProductModel ||--o{ QualityMetric : has
FinancialModel ||--o{ CostModel : includes
FinancialModel ||--o{ RevenueModel : includes
FinancialModel ||--o{ InvestmentModel : includes
FinancialModel ||--o{ EconomicModel : includes
FinancialModel ||--o{ RiskModel : includes
MiningTechnology ||--o{ Parameter : has_many
MiningTechnology ||--o{ QualityMetric : has_many
MiningTechnology ||--o{ MonteCarloSimulation : has_many
CostModel ||--|| CAPEX : has
CostModel ||--|| OPEX : has
RevenueModel ||--o{ RevenueStream : has
RevenueModel ||--o{ ProductSale : has
Currency ||--o{ CAPEX : used_in
Currency ||--o{ OPEX : used_in
Currency ||--o{ RevenueStream : used_in
Currency ||--o{ ProductSale : used_in
Unit ||--o{ ProductModel : used_in
Unit ||--o{ QualityMetric : used_in
Unit ||--o{ Parameter : used_in
```
## User
Users are individuals who have access to the Calminer system.
Each User is associated with a single Mandator and can be assigned to multiple Projects under that Mandator. Users have specific roles and permissions that determine their level of access within the system.
### User Diagram
```mermaid
erDiagram
User {
string UserID PK
string Name
string Email
string PasswordHash
datetime CreatedAt
datetime UpdatedAt
datetime LastLogin
boolean IsActive
}
```
### User Roles
Users can have different roles within the Calminer system, which define their permissions and access levels.
```mermaid
erDiagram
User ||--o{ UserRole : has
UserRole {
string RoleID PK
string RoleName
string Description
}
```
#### System Administrator
The System Administrator role is a special user role that has elevated privileges across the entire Calminer system. This role is responsible for managing system settings, configurations, managing Mandators, user accounts, and overall system maintenance.
#### Mandator Administrator
The Mandator Administrator role is a special user role assigned to individuals who manage a specific Mandator. This role has the authority to oversee Users and Projects within their Mandator, including user assignments and project configurations.
#### Project Manager
The Project Manager role is responsible for overseeing specific Projects within a Mandator. This role has permissions to manage project settings, assign Users to the Project, and monitor project progress.
#### Standard User
The Standard User role can participate in Projects assigned to them but has limited access to administrative functions.
### Permissions
Permissions are defined based on user roles, determining what actions a user can perform within the system. Permissions include:
- View Projects
- Edit Projects
- Manage Users
- Configure System Settings
- Access Reports
- Run Simulations
- Manage Financial Models
- Export Data
- Import Data
- View Logs
- Manage Notifications
- Access API
Permissions are assigned to roles, and users inherit permissions based on their assigned roles. For this purpose, a helper entity `RolePermission` is defined to map roles to their respective permissions.
The Permissions entity (and RolePermission entity) is defined as follows:
```mermaid
erDiagram
RolePermission {
string RoleID FK
string PermissionID FK
}
Permission {
string PermissionID PK
string PermissionName
string Description
}
UserRole ||--o{ RolePermission : has
Permission ||--o{ RolePermission : assigned_to
```
## Mandator
The Mandator entity represents organizational units or clients using the system. A Mandator can have multiple Users and Projects associated with it.
```mermaid
erDiagram
Mandator {
string MandatorID PK
string Name
string ContactInfo
datetime CreatedAt
datetime UpdatedAt
boolean IsActive
}
```
## Project
The Project entity encapsulates the details of mining projects. Attributes include ProjectID, ProjectName, Description, LocationID (linking to the Location entity), CreationDate, and OwnerID (linking to the User entity). Projects can have multiple Datasets associated with them.
```mermaid
erDiagram
Project {
string ProjectID PK
string ProjectName
string Description
string LocationID FK
datetime CreationDate
string OwnerID FK
string Status
datetime StartDate
}
```
## Location
The Location entity captures the geographical context of a mining project. Locations can be linked to multiple Projects.
```mermaid
erDiagram
Location {
string LocationID PK
string Name
string Description
string Coordinates
string Region
string Country
}
```
## Currency
The Currency entity defines the monetary units used in the system. Currency is used when dealing with any monetary values in Projects or Results.
```mermaid
erDiagram
Currency {
string CurrencyID PK
string Code
string Name
string Symbol
datetime CreatedAt
datetime UpdatedAt
boolean IsActive
}
```
## Unit
The Unit entity defines measurement units used in the system. Units are essential for standardizing data representation across mining projects, financial models, and results.
```mermaid
erDiagram
Unit {
string UnitID PK
string Name
string Symbol
string Description
}
```
## Mining Technology
The Mining Technology entity encompasses the various tools and techniques used in mining projects. A Mining Project typically utilizes one specific Mining Technology.
```mermaid
erDiagram
MiningTechnology {
string TechnologyID PK
string Name
string Description
}
```
## Product Model
The Product Model entity captures the details of products extracted or processed in mining projects. It includes attributes such as ProductID, Name, Type and UnitID (linking to the Unit entity). A Product Model can have multiple Quality Metrics associated with it.
```mermaid
erDiagram
ProductModel {
string ProductID PK
string Name
string Type
string UnitID FK
}
ProductModel ||--o{ QualityMetric : has
Unit ||--o{ ProductModel : used_in
Unit ||--o{ QualityMetric : used_in
```
### Quality Metrics
Quality Metrics define the standards and specifications for products derived from mining projects. These metrics ensure that products meet industry and regulatory requirements.
For example, Quality Metrics for a mineral product may include:
- Purity Level
- Moisture Content
- Impurity Levels
- Grain Size Distribution
- Chemical Composition
- Physical Properties
These metrics are essential for assessing the value and usability of the products in various applications and markets.
```mermaid
erDiagram
QualityMetric {
string MetricID PK
string Name
string Description
float MinValue
float MaxValue
string UnitID FK
string AssociatedProductID FK
}
ProductModel ||--o{ QualityMetric : has
Unit ||--o{ QualityMetric : used_in
```
## Financial Model
The Financial Model entity captures the economic aspects of mining projects. The Financial Model includes various sub-models and components that detail costs, revenues, investments, and risks associated with mining projects. Financial Models are the basis for Scenario analyses within Projects.
```mermaid
erDiagram
FinancialModel {
string ModelID PK
string Name
string Description
string AssociatedProjectID FK
}
```
### Cost Model
The Cost Model details the expenses associated with mining projects, including capital expenditures (CAPEX) and operational expenditures (OPEX).
- CAPEX: Initial costs for equipment, infrastructure, and setup.
- OPEX: Ongoing costs for labor, maintenance, and operations.
```mermaid
erDiagram
CostModel {
string ModelID PK
string Name
string Description
string AssociatedProjectID FK
}
CAPEX {
string CAPEXID PK
float EquipmentCost
float InfrastructureCost
float SetupCost
float ContingencyCost
float TotalCAPEX
string CurrencyID FK
}
OPEX {
string OPEXID PK
float LaborCost
float MaterialsCost
float EquipmentRentalCost
float UtilitiesCost
float MaintenanceCost
float TotalOPEX
string CurrencyID FK
}
FinancialModel ||--o{ CostModel : includes
CostModel ||--|| CAPEX : has
CostModel ||--|| OPEX : has
Currency ||--o{ CAPEX : used_in
Currency ||--o{ OPEX : used_in
```
#### CAPEX
For a calculation of Capital Expenditures (CAPEX), the following attributes are included:
- EquipmentCost
- InfrastructureCost
- SetupCost
- ContingencyCost
- TotalCAPEX
- CurrencyID (Foreign Key to Currency)
#### OPEX
For a calculation of Operational Expenditures (OPEX), the following attributes are included:
- LaborCost
- MaterialsCost
- EquipmentRentalCost
- UtilitiesCost
- MaintenanceCost
- TotalOPEX
- CurrencyID (Foreign Key to Currency)
### Revenue Model
The Revenue Model outlines the income generated from mining projects. It includes various revenue streams and product sales.
```mermaid
erDiagram
RevenueModel {
string ModelID PK
string Name
string Description
string AssociatedProjectID FK
}
FinancialModel ||--o{ RevenueModel : includes
RevenueStream {
string RevenueStreamID PK
string Name
string Description
float Amount
string CurrencyID FK
string AssociatedRevenueModelID FK
string Frequency
datetime StartDate
datetime EndDate
boolean IsRecurring
}
FinancialModel ||--o{ RevenueModel : includes
RevenueModel ||--o{ RevenueStream : has
Currency ||--o{ RevenueStream : used_in
```
#### Revenue Streams
Revenue streams can include product sales, service fees, and other income sources related to the mining project.
```mermaid
erDiagram
RevenueStream {
string RevenueStreamID PK
string Name
string Description
float Amount
string CurrencyID FK
string AssociatedRevenueModelID FK
string Frequency
datetime StartDate
datetime EndDate
boolean IsRecurring
}
```
#### Product Sales
Mining product sales represent the primary revenue stream for most mining projects. Product sales revenue is calculated based on the quantity of product sold, price per unit, and any applicable adjustments for quality or other factors like penalties for impurities. Also see [Price Calculation](../../specifications/price_calculation.md) for more details.
```mermaid
erDiagram
ProductSale {
string SaleID PK
string ProductID FK
float QuantitySold
float PricePerUnit
float TotalRevenue
string CurrencyID FK
datetime SaleDate
}
ProductModel ||--o{ ProductSale : has
Currency ||--o{ ProductSale : used_in
```
### Investment Model
The Investment Model focuses on the funding and financial backing of mining projects. It includes details about investors, investment amounts, and dates.
```mermaid
erDiagram
InvestmentModel {
string InvestmentID PK
string InvestorID FK
string ProjectID FK
float Amount
datetime InvestmentDate
}
FinancialModel ||--o{ InvestmentModel : includes
```
### Economic Model
The Economic Model assesses the overall economic viability and impact of mining projects. Economic indicators such as Net Present Value (NPV), Internal Rate of Return (IRR), and Payback Period are calculated within this model.
```mermaid
erDiagram
EconomicModel {
string ModelID PK
string Name
string Description
string AssociatedProjectID FK
}
FinancialModel ||--o{ EconomicModel : includes
```
#### Discounted Cash Flow Metrics
CalMiner standardises the computation of NPV, IRR, and Payback Period through the shared helpers in `services/financial.py`. The detailed algorithms, assumptions (compounding frequency, period anchoring, residual handling), and regression coverage are documented in [Financial Metrics Specification](../../specifications/financial_metrics.md). Scenario evaluation services and downstream analytics should rely on these helpers to ensure consistency across API, UI, and reporting features.
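For intuition, the core calculations reduce to the standard discrete formulas: NPV discounts each period's cash flow, IRR is the rate at which that NPV equals zero, and the payback period is the first period where the cumulative cash flow turns non-negative. The sketch below shows only these textbook forms; the project's own conventions (compounding frequency, period anchoring, residual handling) remain defined by `services/financial.py` and the specification.

```python
# Textbook discrete NPV and payback; not a copy of services/financial.py.
from typing import Sequence


def npv(rate: float, cash_flows: Sequence[float]) -> float:
    """Discount cash flows indexed from period 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))


def payback_period(cash_flows: Sequence[float]) -> float | None:
    """First period where cumulative cash flow becomes non-negative (undiscounted)."""
    cumulative = 0.0
    for t, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative >= 0:
            return float(t)
    return None


flows = [-1_000.0, 300.0, 400.0, 500.0]
print(npv(0.10, flows))       # ≈ -21.04, marginally unattractive at a 10% rate
print(payback_period(flows))  # 3.0
```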
#### Monte Carlo Simulation
Stochastic profitability analysis builds on the deterministic helpers by sampling cash-flow distributions defined per scenario. The configuration contracts, supported distributions, and result schema are described in [Monte Carlo Simulation Specification](../../specifications/monte_carlo_simulation.md). Scenario evaluation flows should call `services/simulation.py` to generate iteration summaries and percentile data for reporting and visualisation features.
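The idea can be illustrated with a toy pass that samples one stochastic variable per iteration and summarises NPV percentiles; the distribution parameters and the cash-flow model below are assumptions for demonstration only, and the real configuration contract belongs to `services/simulation.py`.

```python
# Toy Monte Carlo pass: sample a metal price, rebuild the cash-flow series per
# iteration, and report NPV percentiles. Parameters are illustrative assumptions.
import random
import statistics


def npv(rate, cash_flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))


def simulate(iterations: int = 5_000, seed: int = 42) -> dict[str, float]:
    rng = random.Random(seed)
    results = []
    for _ in range(iterations):
        price = rng.normalvariate(mu=80.0, sigma=10.0)  # assumed NORMAL distribution
        flows = [-1_000.0] + [price * 6.0 - 200.0 for _ in range(5)]  # toy revenue/cost model
        results.append(npv(0.08, flows))
    results.sort()
    return {
        "p10": results[int(0.10 * iterations)],
        "p50": statistics.median(results),
        "p90": results[int(0.90 * iterations)],
    }


print(simulate())
```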
### Risk Model
The Risk Model identifies and evaluates potential risks associated with mining projects. It includes risk factors, their probabilities, and potential impacts on project outcomes.
```mermaid
erDiagram
RiskModel {
string ModelID PK
string Name
string Description
string AssociatedProjectID FK
}
FinancialModel ||--o{ RiskModel : includes
```
## Parameter
Parameters are configurable inputs used within Financial Models to simulate different economic conditions and scenarios for mining projects. Each Parameter has a name, type, default value, and is linked to a specific Technology.
```mermaid
erDiagram
Parameter {
string ParameterID PK
string Name
string Type
string DefaultValue
string Description
string AssociatedTechnologyID FK
}
```
## Scenario
A Scenario represents a specific configuration or analysis case within a Project. Scenarios utilize various Financial Models and Parameters to simulate different outcomes for mining projects.
```mermaid
erDiagram
Scenario {
string ScenarioID PK
string Name
string Description
string AssociatedProjectID FK
}
```
Cross-links between these documents mirror the previous inline references. Update any external links to point at the new files during your next documentation touchpoint.

View File

@@ -0,0 +1,369 @@
# SQLAlchemy Models
This reference describes the primary SQLAlchemy ORM models that underpin Calminer. It mirrors the original hierarchy from `02_data_model.md`, extracted so each domain area can evolve independently. See the [data model overview](../02_data_model.md) for context and navigation.
## User Management
### User
Represents authenticated platform users with optional elevated privileges.
**Table:** `users`
| Attribute | Type | Description |
| ------------- | ------------ | ------------------------- |
| id | Integer (PK) | Primary key |
| email | String(255) | Unique email address |
| username | String(128) | Unique username |
| password_hash | String(255) | Argon2 hashed password |
| is_active | Boolean | Account activation status |
| is_superuser | Boolean | Superuser privileges |
| last_login_at | DateTime | Last login timestamp |
| created_at | DateTime | Creation timestamp |
| updated_at | DateTime | Last update timestamp |
**Relationships:**
- `role_assignments`: Many-to-many with Role via UserRole
### Role
Defines user roles for role-based access control (RBAC).
**Table:** `roles`
| Attribute | Type | Description |
| ------------ | ------------ | --------------------- |
| id | Integer (PK) | Primary key |
| name | String(64) | Unique role name |
| display_name | String(128) | Human-readable name |
| description | Text | Role description |
| created_at | DateTime | Creation timestamp |
| updated_at | DateTime | Last update timestamp |
**Relationships:**
- `assignments`: One-to-many with UserRole
- `users`: Many-to-many with User (viewonly)
### UserRole
Association between users and roles with assignment metadata.
**Table:** `user_roles`
| Attribute | Type | Description |
| ---------- | ----------------------- | -------------------- |
| user_id | Integer (FK → users.id) | User foreign key |
| role_id | Integer (FK → roles.id) | Role foreign key |
| granted_at | DateTime | Assignment timestamp |
| granted_by | Integer (FK → users.id) | Granting user |
**Relationships:**
- `user`: Many-to-one with User
- `role`: Many-to-one with Role
- `granted_by_user`: Many-to-one with User
## Project Management
### Project
Top-level mining project grouping multiple scenarios.
**Table:** `projects`
| Attribute | Type | Description |
| ------------------- | ---------------------------------- | --------------------- |
| id | Integer (PK) | Primary key |
| name | String(255) | Unique project name |
| location | String(255) | Project location |
| operation_type | MiningOperationType | Mining operation type |
| description | Text | Project description |
| pricing_settings_id | Integer (FK → pricing_settings.id) | Pricing settings |
| created_at | DateTime | Creation timestamp |
| updated_at | DateTime | Last update timestamp |
**Relationships:**
- `scenarios`: One-to-many with Scenario
- `pricing_settings`: Many-to-one with PricingSettings
### Scenario
A specific configuration of assumptions for a project.
**Table:** `scenarios`
| Attribute | Type | Description |
| ---------------- | -------------------------- | ------------------------- |
| id | Integer (PK) | Primary key |
| project_id | Integer (FK → projects.id) | Project foreign key |
| name | String(255) | Scenario name |
| description | Text | Scenario description |
| status | ScenarioStatus | Scenario lifecycle status |
| start_date | Date | Scenario start date |
| end_date | Date | Scenario end date |
| discount_rate | Numeric(5,2) | Discount rate percentage |
| currency | String(3) | ISO currency code |
| primary_resource | ResourceType | Primary resource type |
| created_at | DateTime | Creation timestamp |
| updated_at | DateTime | Last update timestamp |
**Relationships:**
- `project`: Many-to-one with Project
- `financial_inputs`: One-to-many with FinancialInput
- `simulation_parameters`: One-to-many with SimulationParameter
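A condensed sketch of how these two tables map onto ORM classes is shown below; only a few columns appear, and the authoritative definitions live in the `models/` package.

```python
# Condensed sketch of the Project/Scenario mapping described above.
from sqlalchemy import Column, Date, ForeignKey, Integer, String
from sqlalchemy.orm import DeclarativeBase, relationship


class Base(DeclarativeBase):
    pass


class Project(Base):
    __tablename__ = "projects"
    id = Column(Integer, primary_key=True)
    name = Column(String(255), unique=True, nullable=False)
    # One-to-many: deleting a project removes its scenarios.
    scenarios = relationship("Scenario", back_populates="project", cascade="all, delete-orphan")


class Scenario(Base):
    __tablename__ = "scenarios"
    id = Column(Integer, primary_key=True)
    project_id = Column(Integer, ForeignKey("projects.id"), nullable=False)
    name = Column(String(255), nullable=False)
    start_date = Column(Date)
    project = relationship("Project", back_populates="scenarios")
```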
### Projects → Scenarios → Profitability Calculations
Calminer organises feasibility data in a nested hierarchy. A project defines the overarching mining context and exposes a one-to-many `scenarios` collection. Each scenario captures a self-contained assumption set and anchors derived artefacts such as financial inputs, simulation parameters, and profitability snapshots. Profitability calculations execute at the scenario scope; when triggered, the workflow in `services/calculations.py` persists a `ScenarioProfitability` record and can optionally roll results up to project level by creating a `ProjectProfitability` snapshot. Consumers typically surface the most recent metrics via the `latest_profitability` helpers on both ORM models.
| Layer | ORM models | Pydantic schema(s) | Key relationships |
| -------------------------- | ----------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------- |
| Project | `models.project.Project` | `schemas.project.ProjectRead` | `Project.scenarios`, `Project.profitability_snapshots`, `Project.latest_profitability` |
| Scenario | `models.scenario.Scenario` | `schemas.scenario.ScenarioRead` | `Scenario.project`, `Scenario.profitability_snapshots`, `Scenario.latest_profitability` |
| Profitability calculations | `models.profitability_snapshot.ProjectProfitability`, `models.profitability_snapshot.ScenarioProfitability` | `schemas.calculations.ProfitabilityCalculationRequest`, `schemas.calculations.ProfitabilityCalculationResult` | Persisted via `services.calculations.calculate_profitability`; aggregates scenario metrics into project snapshots |
Detailed CRUD endpoint behaviour for projects and scenarios is documented in `calminer-docs/api/README.md`.
### FinancialInput
Line-item financial assumption attached to a scenario.
**Table:** `financial_inputs`
| Attribute | Type | Description |
| -------------- | --------------------------- | -------------------------- |
| id | Integer (PK) | Primary key |
| scenario_id | Integer (FK → scenarios.id) | Scenario foreign key |
| name | String(255) | Input name |
| category | FinancialCategory | Financial category |
| cost_bucket | CostBucket | Cost bucket classification |
| amount | Numeric(18,2) | Monetary amount |
| currency | String(3) | ISO currency code |
| effective_date | Date | Effective date |
| notes | Text | Additional notes |
| created_at | DateTime | Creation timestamp |
| updated_at | DateTime | Last update timestamp |
**Relationships:**
- `scenario`: Many-to-one with Scenario
## Project and Scenario Snapshots
### ProjectCapexSnapshot
Project-level snapshot capturing aggregated initial capital expenditure metrics.
**Table:** `project_capex_snapshots`
| Attribute | Type | Description |
| ---------------------- | --------------------------------- | ------------------------------------------------------- |
| id | Integer (PK) | Primary key |
| project_id | Integer (FK → projects.id) | Associated project |
| created_by_id | Integer (FK → users.id, nullable) | User that triggered persistence |
| calculation_source | String(64), nullable | Originating workflow identifier (UI, API, etc.) |
| calculated_at | DateTime | Timestamp the calculation completed |
| currency_code | String(3), nullable | Currency for totals |
| total_capex | Numeric(18,2), nullable | Aggregated capex before contingency |
| contingency_pct | Numeric(12,6), nullable | Applied contingency percentage |
| contingency_amount | Numeric(18,2), nullable | Monetary contingency amount |
| total_with_contingency | Numeric(18,2), nullable | Capex total after contingency |
| component_count | Integer, nullable | Number of normalized components captured |
| payload | JSON, nullable | Serialized component breakdown and calculation metadata |
| created_at | DateTime | Record creation timestamp |
| updated_at | DateTime | Last update timestamp |
**Relationships:**
- `project`: Many-to-one with Project
- `created_by`: Many-to-one with User (nullable)
### ScenarioCapexSnapshot
Scenario-level snapshot storing detailed capex results.
**Table:** `scenario_capex_snapshots`
| Attribute | Type | Description |
| ---------------------- | --------------------------------- | ------------------------------------------------------- |
| id | Integer (PK) | Primary key |
| scenario_id | Integer (FK → scenarios.id) | Associated scenario |
| created_by_id | Integer (FK → users.id, nullable) | User that triggered persistence |
| calculation_source | String(64), nullable | Originating workflow identifier |
| calculated_at | DateTime | Timestamp the calculation completed |
| currency_code | String(3), nullable | Currency for totals |
| total_capex | Numeric(18,2), nullable | Aggregated capex before contingency |
| contingency_pct | Numeric(12,6), nullable | Applied contingency percentage |
| contingency_amount | Numeric(18,2), nullable | Monetary contingency amount |
| total_with_contingency | Numeric(18,2), nullable | Capex total after contingency |
| component_count | Integer, nullable | Number of normalized components captured |
| payload | JSON, nullable | Serialized component breakdown and calculation metadata |
| created_at | DateTime | Record creation timestamp |
| updated_at | DateTime | Last update timestamp |
**Relationships:**
- `scenario`: Many-to-one with Scenario
- `created_by`: Many-to-one with User (nullable)
### ProjectOpexSnapshot
Project-level snapshot persisting recurring opex metrics.
**Table:** `project_opex_snapshots`
| Attribute | Type | Description |
| ------------------------ | --------------------------------- | ------------------------------------------------------- |
| id | Integer (PK) | Primary key |
| project_id | Integer (FK → projects.id) | Associated project |
| created_by_id | Integer (FK → users.id, nullable) | User that triggered persistence |
| calculation_source | String(64), nullable | Originating workflow identifier |
| calculated_at | DateTime | Timestamp the calculation completed |
| currency_code | String(3), nullable | Currency for totals |
| overall_annual | Numeric(18,2), nullable | Total annual opex |
| escalated_total | Numeric(18,2), nullable | Escalated cost across the evaluation horizon |
| annual_average | Numeric(18,2), nullable | Average annual cost over the horizon |
| evaluation_horizon_years | Integer, nullable | Number of years included in the timeline |
| escalation_pct | Numeric(12,6), nullable | Escalation percentage applied |
| apply_escalation | Boolean | Flag indicating whether escalation was applied |
| component_count | Integer, nullable | Number of normalized components captured |
| payload | JSON, nullable | Serialized component breakdown and calculation metadata |
| created_at | DateTime | Record creation timestamp |
| updated_at | DateTime | Last update timestamp |
**Relationships:**
- `project`: Many-to-one with Project
- `created_by`: Many-to-one with User (nullable)
### ScenarioOpexSnapshot
Scenario-level snapshot persisting recurring opex metrics.
**Table:** `scenario_opex_snapshots`
| Attribute | Type | Description |
| ------------------------ | --------------------------------- | ------------------------------------------------------- |
| id | Integer (PK) | Primary key |
| scenario_id | Integer (FK → scenarios.id) | Associated scenario |
| created_by_id | Integer (FK → users.id, nullable) | User that triggered persistence |
| calculation_source | String(64), nullable | Originating workflow identifier |
| calculated_at | DateTime | Timestamp the calculation completed |
| currency_code | String(3), nullable | Currency for totals |
| overall_annual | Numeric(18,2), nullable | Total annual opex |
| escalated_total | Numeric(18,2), nullable | Escalated cost across the evaluation horizon |
| annual_average | Numeric(18,2), nullable | Average annual cost over the horizon |
| evaluation_horizon_years | Integer, nullable | Number of years included in the timeline |
| escalation_pct | Numeric(12,6), nullable | Escalation percentage applied |
| apply_escalation | Boolean | Flag indicating whether escalation was applied |
| component_count | Integer, nullable | Number of normalized components captured |
| payload | JSON, nullable | Serialized component breakdown and calculation metadata |
| created_at | DateTime | Record creation timestamp |
| updated_at | DateTime | Last update timestamp |
**Relationships:**
- `scenario`: Many-to-one with Scenario
- `created_by`: Many-to-one with User (nullable)
### SimulationParameter
Probability distribution settings for scenario simulations.
**Table:** `simulation_parameters`
| Attribute | Type | Description |
| ------------------ | --------------------------- | ------------------------ |
| id | Integer (PK) | Primary key |
| scenario_id | Integer (FK → scenarios.id) | Scenario foreign key |
| name | String(255) | Parameter name |
| distribution | DistributionType | Distribution type |
| variable | StochasticVariable | Stochastic variable type |
| resource_type | ResourceType | Resource type |
| mean_value | Numeric(18,4) | Mean value |
| standard_deviation | Numeric(18,4) | Standard deviation |
| minimum_value | Numeric(18,4) | Minimum value |
| maximum_value | Numeric(18,4) | Maximum value |
| unit | String(32) | Unit of measurement |
| configuration | JSON | Additional configuration |
| created_at | DateTime | Creation timestamp |
| updated_at | DateTime | Last update timestamp |
**Relationships:**
- `scenario`: Many-to-one with Scenario
## Pricing Configuration
### PricingSettings
Persisted pricing defaults applied to scenario evaluations.
**Table:** `pricing_settings`
| Attribute | Type | Description |
| ------------------------ | ------------- | ----------------------------- |
| id | Integer (PK) | Primary key |
| name | String(128) | Unique settings name |
| slug | String(64) | Unique slug identifier |
| description | Text | Settings description |
| default_currency | String(3) | Default ISO currency code |
| default_payable_pct | Numeric(5,2) | Default payable percentage |
| moisture_threshold_pct | Numeric(5,2) | Moisture threshold percentage |
| moisture_penalty_per_pct | Numeric(14,4) | Moisture penalty per percent |
| metadata_payload | JSON | Additional metadata |
| created_at | DateTime | Creation timestamp |
| updated_at | DateTime | Last update timestamp |
**Relationships:**
- `metal_overrides`: One-to-many with PricingMetalSettings
- `impurity_overrides`: One-to-many with PricingImpuritySettings
- `projects`: One-to-many with Project
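One plausible reading of how these fields combine is sketched below; the authoritative rules live in the Price Calculation specification, so treat the formula as an assumption rather than the implemented logic.

```python
# Assumption-laden sketch of applying a payable percentage and a moisture penalty
# above the configured threshold. Not the implemented pricing formula.
def payable_revenue(
    gross_value: float,
    payable_pct: float,             # e.g. PricingSettings.default_payable_pct
    moisture_pct: float,
    moisture_threshold_pct: float,
    moisture_penalty_per_pct: float,
) -> float:
    revenue = gross_value * payable_pct / 100.0
    excess = max(0.0, moisture_pct - moisture_threshold_pct)
    return revenue - excess * moisture_penalty_per_pct  # penalty only above the threshold


print(payable_revenue(100_000, payable_pct=96.5, moisture_pct=9.0,
                      moisture_threshold_pct=8.0, moisture_penalty_per_pct=250.0))
# 96_500 - 1.0 * 250 = 96_250.0
```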
### PricingMetalSettings
Contract-specific overrides for a particular metal.
**Table:** `pricing_metal_settings`
| Attribute | Type | Description |
| ------------------------ | ---------------------------------- | ---------------------------- |
| id | Integer (PK) | Primary key |
| pricing_settings_id | Integer (FK → pricing_settings.id) | Pricing settings foreign key |
| metal_code | String(32) | Metal code |
| payable_pct | Numeric(5,2) | Payable percentage |
| moisture_threshold_pct | Numeric(5,2) | Moisture threshold |
| moisture_penalty_per_pct | Numeric(14,4) | Moisture penalty |
| data | JSON | Additional data |
| created_at | DateTime | Creation timestamp |
| updated_at | DateTime | Last update timestamp |
**Relationships:**
- `pricing_settings`: Many-to-one with PricingSettings
### PricingImpuritySettings
Impurity penalty thresholds associated with pricing settings.
**Table:** `pricing_impurity_settings`
| Attribute | Type | Description |
| ------------------- | ---------------------------------- | ---------------------------- |
| id | Integer (PK) | Primary key |
| pricing_settings_id | Integer (FK → pricing_settings.id) | Pricing settings foreign key |
| impurity_code | String(32) | Impurity code |
| threshold_ppm | Numeric(14,4) | Threshold in ppm |
| penalty_per_ppm | Numeric(14,4) | Penalty per ppm |
| notes | Text | Additional notes |
| created_at | DateTime | Creation timestamp |
| updated_at | DateTime | Last update timestamp |
**Relationships:**
- `pricing_settings`: Many-to-one with PricingSettings

View File

@@ -0,0 +1,65 @@
# Navigation Metadata
This page details the navigation metadata tables that drive the Calminer sidebar and top-level menus. It was split from `02_data_model.md` to isolate runtime navigation behaviour from the broader ORM catalogue.
## NavigationGroup
Represents a logical grouping of navigation links shown in the UI sidebar or header.
**Table:** `navigation_groups`
| Attribute | Type | Description |
| ------------ | ------------ | ----------------------------------------------- |
| id | Integer (PK) | Primary key |
| slug | String(64) | Unique slug identifier |
| title | String(128) | Display title |
| description | Text | Optional descriptive text |
| match_prefix | String(255) | URL prefix used to auto-expand in UI |
| weighting | Integer | Ordering weight |
| icon_name | String(64) | Optional icon identifier |
| feature_flag | String(64) | Optional feature flag for conditional rendering |
| created_at | DateTime | Creation timestamp |
| updated_at | DateTime | Last update timestamp |
**Relationships:**
- `links`: One-to-many with NavigationLink
## NavigationLink
Individual navigation entry that may have an associated feature flag or role restriction.
**Table:** `navigation_links`
| Attribute | Type | Description |
| ------------- | ----------------------------------- | ---------------------------------------------------- |
| id | Integer (PK) | Primary key |
| group_id | Integer (FK → navigation_groups.id) | Parent navigation group |
| parent_id | Integer (FK → navigation_links.id) | Optional parent link (for nested menus) |
| slug | String(64) | Unique slug identifier |
| title | String(128) | Display title |
| description | Text | Optional descriptive text |
| href | String(255) | Static fallback URL |
| match_prefix | String(255) | URL prefix for expansion |
| feature_flag | String(64) | Optional feature flag for conditional rendering |
| required_role | String(64) | Optional role required to view the link |
| weighting | Integer | Ordering weight within the parent scope |
| icon_name | String(64) | Optional icon identifier |
| metadata | JSON | Additional configuration (e.g. legacy route aliases) |
| created_at | DateTime | Creation timestamp |
| updated_at | DateTime | Last update timestamp |
**Relationships:**
- `group`: Many-to-one with NavigationGroup
- `parent`: Many-to-one self-reference
- `children`: One-to-many self-reference
## Seeding and Runtime Consumption
- **Seed script:** `scripts/init_db.py` provisions baseline groups and links, including nested scenario calculators and role-gated admin entries.
- **Service layer:** `services/navigation.py` maps ORM entities into response DTOs, resolves contextual URLs for projects and scenarios, and applies feature-flag and role filters.
- **API exposure:** `routes/navigation.py` serves the `/navigation/sidebar` endpoint, combining seeded data with runtime context to produce navigation trees consumed by the frontend.
- **Testing:** `tests/services/test_navigation_service.py` covers URL resolution and child filtering logic, while `tests/integration/test_navigation_sidebar_calculations.py` verifies scenario calculator entries in the API payload.
Refer to the [navigation service documentation](../../../../api/README.md#navigation) for endpoint-level behaviour.
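The filtering applied by the service layer can be summarised in a few lines; the function and field names below are illustrative of the idea, not the actual `services/navigation.py` API.

```python
# Sketch of feature-flag and role filtering for navigation links.
from dataclasses import dataclass


@dataclass
class Link:
    slug: str
    required_role: str | None = None
    feature_flag: str | None = None


def visible_links(links: list[Link], user_roles: set[str], enabled_flags: set[str]) -> list[Link]:
    visible = []
    for link in links:
        if link.feature_flag and link.feature_flag not in enabled_flags:
            continue  # feature not enabled for this deployment
        if link.required_role and link.required_role not in user_roles:
            continue  # caller lacks the required role
        visible.append(link)
    return visible


links = [Link("projects"), Link("admin-users", required_role="admin")]
print([link.slug for link in visible_links(links, user_roles={"viewer"}, enabled_flags=set())])
# -> ['projects']
```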

View File

@@ -0,0 +1,83 @@
# Enumerations
This page catalogs the enumerations shared by Calminer ORM models and Pydantic schemas. The entries mirror the original content from `02_data_model.md` so enum updates can be maintained independently of the broader data model reference.
## MiningOperationType
Supported mining operation categories.
- `OPEN_PIT`: Open pit mining
- `UNDERGROUND`: Underground mining
- `IN_SITU_LEACH`: In-situ leaching
- `PLACER`: Placer mining
- `QUARRY`: Quarry operations
- `MOUNTAINTOP_REMOVAL`: Mountaintop removal
- `OTHER`: Other operations
## ScenarioStatus
Lifecycle states for project scenarios.
- `DRAFT`: Draft status
- `ACTIVE`: Active status
- `ARCHIVED`: Archived status
## FinancialCategory
Enumeration of cost and revenue classifications.
- `CAPITAL_EXPENDITURE`: Capital expenditures
- `OPERATING_EXPENDITURE`: Operating expenditures
- `REVENUE`: Revenue
- `CONTINGENCY`: Contingency
- `OTHER`: Other
## DistributionType
Supported stochastic distribution families for simulations.
- `NORMAL`: Normal distribution
- `TRIANGULAR`: Triangular distribution
- `UNIFORM`: Uniform distribution
- `LOGNORMAL`: Lognormal distribution
- `CUSTOM`: Custom distribution
## ResourceType
Primary consumables and resources used in mining operations.
- `DIESEL`: Diesel fuel
- `ELECTRICITY`: Electrical power
- `WATER`: Process water
- `EXPLOSIVES`: Blasting agents
- `REAGENTS`: Processing reagents
- `LABOR`: Direct labor
- `EQUIPMENT_HOURS`: Equipment operating hours
- `TAILINGS_CAPACITY`: Tailings storage
## CostBucket
Granular cost buckets aligned with project accounting.
- `CAPITAL_INITIAL`: Initial capital
- `CAPITAL_SUSTAINING`: Sustaining capital
- `OPERATING_FIXED`: Fixed operating costs
- `OPERATING_VARIABLE`: Variable operating costs
- `MAINTENANCE`: Maintenance costs
- `RECLAMATION`: Reclamation costs
- `ROYALTIES`: Royalties
- `GENERAL_ADMIN`: General and administrative
## StochasticVariable
Domain variables that typically require probabilistic modelling.
- `ORE_GRADE`: Ore grade variability
- `RECOVERY_RATE`: Metallurgical recovery
- `METAL_PRICE`: Commodity price
- `OPERATING_COST`: Operating cost per tonne
- `CAPITAL_COST`: Capital cost
- `DISCOUNT_RATE`: Discount rate
- `THROUGHPUT`: Plant throughput
Refer back to the [data model overview](../02_data_model.md) when choosing the appropriate enum for new features.

View File

@@ -0,0 +1,267 @@
# Pydantic Schemas
This page documents the Pydantic models used for request/response validation across the Calminer API surface. It was extracted from `02_data_model.md` so API-facing contracts can evolve separately from the ORM layer.
## Authentication Schemas (`schemas/auth.py`)
### RegistrationForm
Form model for user registration.
| Field | Type | Validation |
| ---------------- | ---- | ------------------------------- |
| username | str | 3-128 characters |
| email | str | Valid email format, 5-255 chars |
| password | str | 8-256 characters |
| confirm_password | str | Must match password |
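A minimal sketch of how the constraints in the table above could be expressed with Pydantic v2 follows; the field bounds come from the table, while the validator name and error message are assumptions rather than the actual `schemas/auth.py` implementation.

```python
from pydantic import BaseModel, Field, model_validator


class RegistrationForm(BaseModel):
    """Approximation of the documented registration constraints."""
    username: str = Field(min_length=3, max_length=128)
    # Email format validation (e.g. EmailStr) is documented but omitted here
    # to keep the sketch dependency-free; only the length bounds are enforced.
    email: str = Field(min_length=5, max_length=255)
    password: str = Field(min_length=8, max_length=256)
    confirm_password: str

    @model_validator(mode="after")
    def passwords_match(self) -> "RegistrationForm":
        # Reject submissions where the two password fields disagree.
        if self.password != self.confirm_password:
            raise ValueError("passwords do not match")
        return self
```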
### LoginForm
Form model for user login.
| Field | Type | Validation |
| -------- | ---- | ---------------- |
| username | str | 1-255 characters |
| password | str | 1-256 characters |
### PasswordResetRequestForm
Form model for password reset request.
| Field | Type | Validation |
| ----- | ---- | ------------------------------- |
| email | str | Valid email format, 5-255 chars |
### PasswordResetForm
Form model for password reset.
| Field | Type | Validation |
| ---------------- | ---- | ------------------- |
| token | str | Required |
| password | str | 8-256 characters |
| confirm_password | str | Must match password |
## Project Schemas (`schemas/project.py`)
### ProjectBase
Base schema for project data.
| Field | Type | Description |
| -------------- | ------------------- | --------------------- |
| name | str | Project name |
| location | str \| None | Project location |
| operation_type | MiningOperationType | Mining operation type |
| description | str \| None | Project description |
### ProjectCreate
Schema for creating projects (inherits ProjectBase).
### ProjectUpdate
Schema for updating projects.
| Field | Type | Description |
| -------------- | --------------------------- | --------------------- |
| name | str \| None | Project name |
| location | str \| None | Project location |
| operation_type | MiningOperationType \| None | Mining operation type |
| description | str \| None | Project description |
### ProjectRead
Schema for reading projects (inherits ProjectBase).
| Field | Type | Description |
| ---------- | -------- | ------------------ |
| id | int | Project ID |
| created_at | datetime | Creation timestamp |
| updated_at | datetime | Update timestamp |
## Scenario Schemas (`schemas/scenario.py`)
### ScenarioBase
Base schema for scenario data.
| Field | Type | Description |
| ---------------- | -------------------- | -------------------------------- |
| name | str | Scenario name |
| description | str \| None | Scenario description |
| status | ScenarioStatus | Scenario status (default: DRAFT) |
| start_date | date \| None | Start date |
| end_date | date \| None | End date |
| discount_rate | float \| None | Discount rate |
| currency | str \| None | ISO currency code |
| primary_resource | ResourceType \| None | Primary resource |
### ScenarioCreate
Schema for creating scenarios (inherits ScenarioBase).
### ScenarioUpdate
Schema for updating scenarios.
| Field | Type | Description |
| ---------------- | ---------------------- | -------------------- |
| name | str \| None | Scenario name |
| description | str \| None | Scenario description |
| status | ScenarioStatus \| None | Scenario status |
| start_date | date \| None | Start date |
| end_date | date \| None | End date |
| discount_rate | float \| None | Discount rate |
| currency | str \| None | ISO currency code |
| primary_resource | ResourceType \| None | Primary resource |
### ScenarioRead
Schema for reading scenarios (inherits ScenarioBase).
| Field | Type | Description |
| ---------- | -------- | ------------------ |
| id | int | Scenario ID |
| project_id | int | Project ID |
| created_at | datetime | Creation timestamp |
| updated_at | datetime | Update timestamp |
### ScenarioComparisonRequest
Schema for scenario comparison requests.
| Field | Type | Description |
| ------------ | --------- | --------------------------------------- |
| scenario_ids | list[int] | List of scenario IDs (minimum 2 unique) |
### ScenarioComparisonResponse
Schema for scenario comparison responses.
| Field | Type | Description |
| ---------- | ------------------ | --------------------- |
| project_id | int | Project ID |
| scenarios | list[ScenarioRead] | List of scenario data |
## Import Schemas (`schemas/imports.py`)
### ProjectImportRow
Schema for importing project data.
| Field | Type | Description |
| -------------- | ------------------- | ----------------------- |
| name | str | Project name (required) |
| location | str \| None | Project location |
| operation_type | MiningOperationType | Mining operation type |
| description | str \| None | Project description |
| created_at | datetime \| None | Creation timestamp |
| updated_at | datetime \| None | Update timestamp |
### ScenarioImportRow
Schema for importing scenario data.
| Field | Type | Description |
| ---------------- | -------------------- | -------------------------------- |
| project_name | str | Project name (required) |
| name | str | Scenario name (required) |
| status | ScenarioStatus | Scenario status (default: DRAFT) |
| start_date | date \| None | Start date |
| end_date | date \| None | End date |
| discount_rate | float \| None | Discount rate |
| currency | str \| None | ISO currency code |
| primary_resource | ResourceType \| None | Primary resource |
| description | str \| None | Scenario description |
| created_at | datetime \| None | Creation timestamp |
| updated_at | datetime \| None | Update timestamp |
## Export Schemas (`schemas/exports.py`)
### ExportFormat
Enumeration for export formats.
- `CSV`: CSV format
- `XLSX`: Excel format
### BaseExportRequest
Base schema for export requests.
| Field | Type | Description |
| ---------------- | ------------ | --------------------------------- |
| format | ExportFormat | Export format (default: CSV) |
| include_metadata | bool | Include metadata (default: False) |
### ProjectExportRequest
Schema for project export requests (inherits BaseExportRequest).
| Field | Type | Description |
| ------- | ---------------------------- | -------------- |
| filters | ProjectExportFilters \| None | Export filters |
### ScenarioExportRequest
Schema for scenario export requests (inherits BaseExportRequest).
| Field | Type | Description |
| ------- | ----------------------------- | -------------- |
| filters | ScenarioExportFilters \| None | Export filters |
### ExportTicket
Schema for export tickets.
| Field | Type | Description |
| -------- | -------------------------------- | ------------- |
| token | str | Export token |
| format | ExportFormat | Export format |
| resource | Literal["projects", "scenarios"] | Resource type |
### ExportResponse
Schema for export responses.
| Field | Type | Description |
| ------ | ------------ | ------------- |
| ticket | ExportTicket | Export ticket |
## Discrepancies Between Data Models and Pydantic Schemas
### Missing Pydantic Schemas
The following SQLAlchemy models do not have corresponding Pydantic schemas:
- `User` - Authentication handled through forms, not API schemas
- `Role` - Role management not exposed via API
- `UserRole` - User-role associations not exposed via API
- `FinancialInput` - Financial inputs managed through scenario context
- `SimulationParameter` - Simulation parameters managed through scenario context
- `PricingSettings` - Pricing settings managed through project context
- `PricingMetalSettings` - Pricing overrides managed through settings context
- `PricingImpuritySettings` - Pricing overrides managed through settings context
- `PerformanceMetric` - Metrics not exposed via API
- `ImportExportLog` - Audit logs not exposed via API
### Schema Differences
- **Project schemas** include `operation_type` as required enum, while the model allows null but defaults to `OTHER`
- **Scenario schemas** include currency normalization validation not present in the model validator
- **Import schemas** include extensive validation and coercion logic for CSV/Excel data parsing
- **Export schemas** reference filter classes (`ProjectExportFilters`, `ScenarioExportFilters`) not defined in this document
### Additional Validation
Pydantic schemas include additional validation beyond SQLAlchemy model constraints:
- Email format validation in auth forms
- Password confirmation matching
- Currency code normalization and validation
- Date range validation (end_date >= start_date)
- Required field validation for imports
- Enum value coercion with aliases for imports
Refer back to the [data model overview](../02_data_model.md) when aligning ORM entities with these schemas.
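As an illustration of this extra validation layer, the sketch below approximates the currency normalization and date-range checks with Pydantic v2; the class and validator names are hypothetical and cover only a subset of `ScenarioBase`.

```python
from datetime import date

from pydantic import BaseModel, field_validator, model_validator


class ScenarioWindow(BaseModel):
    """Illustrative subset of ScenarioBase covering currency and date checks."""
    currency: str | None = None
    start_date: date | None = None
    end_date: date | None = None

    @field_validator("currency")
    @classmethod
    def normalise_currency(cls, value: str | None) -> str | None:
        # Normalize to an upper-case ISO 4217 code and enforce a length of 3.
        if value is None:
            return None
        code = value.strip().upper()
        if len(code) != 3:
            raise ValueError("currency must be a 3-letter ISO code")
        return code

    @model_validator(mode="after")
    def check_date_range(self) -> "ScenarioWindow":
        # end_date must not precede start_date when both are supplied.
        if self.start_date and self.end_date and self.end_date < self.start_date:
            raise ValueError("end_date must be on or after start_date")
        return self
```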

View File

@@ -0,0 +1,41 @@
# Monitoring and Auditing
This page documents the monitoring- and auditing-related tables that were previously embedded in `02_data_model.md`. Separating them keeps operational telemetry isolated from the core project/scenario data model reference.
## PerformanceMetric
Tracks API performance metrics used by the monitoring pipeline.
**Table:** `performance_metrics`
| Attribute | Type | Description |
| ---------------- | ------------ | --------------------- |
| id | Integer (PK) | Primary key |
| timestamp | DateTime | Metric timestamp |
| metric_name | String | Metric name |
| value | Float | Metric value |
| labels | String | JSON string of labels |
| endpoint | String | API endpoint |
| method | String | HTTP method |
| status_code | Integer | HTTP status code |
| duration_seconds | Float | Request duration |
## ImportExportLog
Audit log for import and export operations initiated by users.
**Table:** `import_export_logs`
| Attribute | Type | Description |
| ---------- | ----------------------- | ------------------------------------- |
| id | Integer (PK) | Primary key |
| action | String(32) | Action type (preview, commit, export) |
| dataset | String(32) | Dataset type (projects, scenarios) |
| status | String(16) | Operation status |
| filename | String(255) | File name |
| row_count | Integer | Number of rows |
| detail | Text | Additional details |
| user_id | Integer (FK → users.id) | User foreign key |
| created_at | DateTime | Creation timestamp |
These tables power telemetry dashboards and audit trails but are not exposed via public API endpoints.

View File

@@ -10,7 +10,7 @@ All sensitive data is encrypted at rest and in transit to prevent unauthorized a
Role-based access controls (RBAC) are implemented to restrict data access based on user roles and responsibilities.
Also see [Authentication and Authorization](../08_concepts.md#authentication-and-authorization) and the [Data Model](../08_concepts/02_data_model.md#user-roles) sections.
Also see [Authentication and Authorization](../08_concepts.md#authentication-and-authorization) and the [Data Model](../08_concepts/02_data_model/01_sqlalchemy_models.md#userrole) sections.
- Default administrative credentials are provided at deployment time through environment variables (`CALMINER_SEED_ADMIN_EMAIL`, `CALMINER_SEED_ADMIN_USERNAME`, `CALMINER_SEED_ADMIN_PASSWORD`, `CALMINER_SEED_ADMIN_ROLES`). These values are consumed by a shared bootstrap helper on application startup, ensuring mandatory roles and the administrator account exist before any user interaction.
- Operators can request a managed credential reset by setting `CALMINER_SEED_FORCE=true`. On the next startup the helper rotates the admin password and reapplies role assignments, so downstream environments must update stored secrets immediately after the reset.

View File

@@ -0,0 +1,39 @@
# Reporting and Analytics
The Calminer application provides robust reporting and analytics features designed to help users visualize and interpret mining project data effectively.
## Dashboard Generation
Users can create and customize dashboards that display key metrics and performance indicators related to mining projects. Dashboards support various visualization types, including charts, graphs, and tables, and allow users to drill down into specific data points for deeper analysis.
## Report Creation
The system enables users to create detailed reports that summarize project scenarios, financial analyses, and simulation results. Reports can be exported in multiple formats, including PDF and Excel, for easy sharing and presentation.
## Data Visualization Tools
The system provides tools for visualizing data through charts, graphs, and dashboards, making it easier to identify trends and insights. Users can interact with visualizations to explore data in more detail.
## Analytics Engine
The backend analytics engine processes large datasets and performs complex calculations, including Monte Carlo simulations, to provide users with probabilistic assessments of project outcomes.
### Real-time Analytics
The system supports real-time data processing and analytics, enabling users to access up-to-date information and respond quickly to changing conditions.
### Historical Analysis
The system maintains a history of key metrics and events, allowing for retrospective analysis and identification of long-term trends.
## Custom Reports
Users can create custom reports based on specific criteria, allowing for tailored analysis of project performance and resource utilization.
## User-Friendly Interface
The reporting and analytics features are accessible through an intuitive web interface, allowing users to navigate reports and dashboards seamlessly.
## Integration with Scenario Management
Reporting tools are tightly integrated with the scenario management system, enabling users to generate reports directly from specific project scenarios and compare results across different configurations.

requirements/FR-011.md Normal file
View File

@@ -0,0 +1,53 @@
# Functional Requirement FR-011: Basic Profitability Calculation
## Description
The system shall provide functionality to perform basic profitability calculations for ore mining products based on user-defined input parameters.
A detailed specification is available in [specifications/price_calculation.md](specifications/price_calculation.md).
## Rationale
Profitability calculations are essential for mining companies to assess the financial viability of their operations. By enabling users to input key parameters and receive profitability metrics, the system will support informed decision-making and strategic planning.
## Acceptance Criteria
1. **Input Parameters**: The system shall accept the following input parameters:
- Metal type (e.g., copper, gold, lithium)
- Ore tonnage processed
- Head grade (%)
- Recovery rate (%)
- Treatment charge (currency/tonne)
- Smelting charge (currency/tonne)
- Moisture content (%)
- Impurity content (ppm for relevant impurities)
- Moisture penalty factor (currency/%)
- Impurity penalty factor (currency/ppm)
- Premiums/credits (currency)
- FX rate (currency conversion rate)
2. **Calculation Logic**: The system shall implement the profitability calculation logic as specified in the detailed specification document (a simplified sketch follows the acceptance criteria), including:
- Calculation of metal content based on ore tonnage, head grade, and recovery rate.
- Computation of gross revenue using metal content and reference prices.
- Deduction of treatment and smelting charges.
- Application of moisture and impurity penalties.
- Adjustment for premiums/credits.
- Conversion of final revenue to scenario currency using the FX rate.
3. **Output Metrics**: The system shall provide the following output metrics:
- Gross revenue (in scenario currency)
- Total charges (treatment and smelting)
- Total penalties (moisture and impurities)
- Net revenue before premiums
- Final adjusted revenue (after premiums/credits)
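The sketch below walks through the arithmetic implied by these criteria, assuming moisture is the only penalty and that every output is converted with the FX rate; the authoritative formulas live in `specifications/price_calculation.md`.

```python
def sketch_profitability(
    ore_tonnage: float,
    head_grade_pct: float,
    recovery_pct: float,
    reference_price: float,
    treatment_charge: float = 0.0,
    smelting_charge: float = 0.0,
    moisture_pct: float = 0.0,
    moisture_threshold_pct: float = 0.0,
    moisture_penalty_per_pct: float = 0.0,
    premiums: float = 0.0,
    fx_rate: float = 1.0,
) -> dict[str, float]:
    """Hedged walk-through of the FR-011 acceptance criteria (moisture-only penalties)."""
    # 1) Metal content recovered from the processed ore.
    metal_tonnes = ore_tonnage * (head_grade_pct / 100) * (recovery_pct / 100)
    # 2) Gross revenue from recovered metal at the reference price.
    gross_revenue = metal_tonnes * reference_price
    # 3) Treatment and smelting charges, assumed to be quoted per tonne of ore.
    total_charges = (treatment_charge + smelting_charge) * ore_tonnage
    # 4) Penalty on moisture above the threshold (impurity penalties would follow the same shape).
    excess_moisture = max(0.0, moisture_pct - moisture_threshold_pct)
    total_penalties = excess_moisture * moisture_penalty_per_pct
    # 5) Premiums/credits, then conversion into the scenario currency.
    net_before_premiums = gross_revenue - total_charges - total_penalties
    final_adjusted = (net_before_premiums + premiums) * fx_rate
    return {
        "gross_revenue": gross_revenue * fx_rate,
        "total_charges": total_charges * fx_rate,
        "total_penalties": total_penalties * fx_rate,
        "net_revenue_before_premiums": net_before_premiums * fx_rate,
        "final_adjusted_revenue": final_adjusted,
    }
```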
## Dependencies
- Accurate and up-to-date reference prices for supported metals.
- User interface components for inputting parameters and displaying results.
- Data validation mechanisms to ensure input parameters are within acceptable ranges.
- Integration with existing scenario management and reporting modules for seamless user experience.
## Notes
- The system should allow for easy updates to reference prices and other key parameters as market conditions change.
- User training and documentation will be essential to ensure effective use of the profitability calculation features.
- Future enhancements may include sensitivity analysis and scenario comparisons based on profitability metrics.

requirements/FR-012.md Normal file
View File

@@ -0,0 +1,50 @@
# Functional Requirement FR-012: Monte Carlo simulation engine
## Description
The system shall provide a Monte Carlo simulation engine to model uncertainties in mining project parameters. This engine will allow users to define probability distributions for key input variables and run simulations to assess the impact of these uncertainties on project outcomes.
A detailed specification is available in [specifications/monte_carlo_simulation.md](specifications/monte_carlo_simulation.md).
## Rationale
Monte Carlo simulations are a powerful tool for risk analysis and decision-making in mining projects. By modeling the variability of input parameters, users can better understand potential outcomes, identify risks, and make informed decisions based on probabilistic results.
## Acceptance Criteria
1. **Input Parameter Definitions**: The system shall allow users to define probability distributions for key input parameters (illustrated in the sketch after these criteria), including but not limited to:
- Ore grade
- Recovery rate
- Operating costs
- Metal prices
2. **Simulation Configuration**: The system shall provide options for users to configure the simulation, including:
- Number of simulation iterations
- Random seed for reproducibility
- Output metrics to be recorded
3. **Result Analysis**: The system shall offer tools for analyzing simulation results, including:
- Summary statistics (mean, median, percentiles)
- Visualization of result distributions (histograms, box plots)
- Comparison of different scenarios or input configurations
4. **Integration with Existing Workflows**: The Monte Carlo simulation engine shall be integrated with existing scenario management and reporting tools within the system, allowing users to easily incorporate simulation results into their decision-making processes.
5. **Documentation**: The system shall include comprehensive documentation on how to use the Monte Carlo simulation engine, including examples and best practices.
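To make the configuration and result-analysis criteria concrete, here is a hedged sketch using NumPy; the chosen distributions, parameter values, and the margin-per-tonne outcome metric are illustrative assumptions, not the engine's actual model.

```python
import numpy as np


def run_monte_carlo(iterations: int = 10_000, seed: int | None = 42) -> dict[str, float]:
    """Sample uncertain inputs and summarise a simple margin-per-tonne outcome."""
    rng = np.random.default_rng(seed)  # seeded for reproducibility

    # Illustrative distributions for key inputs (units and parameters are assumptions).
    grade_pct = rng.normal(loc=1.2, scale=0.15, size=iterations)            # ore grade, %
    recovery_pct = rng.triangular(82, 88, 93, size=iterations)              # metallurgical recovery, %
    price = rng.lognormal(mean=np.log(9000), sigma=0.12, size=iterations)   # metal price per tonne
    opex = rng.uniform(35, 55, size=iterations)                             # operating cost per tonne of ore

    # Outcome metric: margin per tonne of ore processed.
    margin = price * (grade_pct / 100) * (recovery_pct / 100) - opex

    return {
        "mean": float(margin.mean()),
        "p10": float(np.percentile(margin, 10)),
        "p50": float(np.percentile(margin, 50)),
        "p90": float(np.percentile(margin, 90)),
        "prob_negative": float((margin < 0).mean()),
    }


print(run_monte_carlo())
```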
## Dependencies
- Access to historical data for key input parameters to inform probability distribution definitions.
- Computational resources to support the execution of large-scale simulations.
- Integration with data management systems to facilitate the use of external data sources.
- User interface components for defining input parameters, configuring simulations, and visualizing results.
## Notes
- The Monte Carlo simulation engine should be designed for scalability to accommodate large datasets and complex models.
- Users should be able to easily update probability distributions and other simulation parameters as new information becomes available.
- Collaboration features may be beneficial to allow multiple users to work on simulation scenarios simultaneously.
- Future enhancements may include sensitivity analysis and scenario optimization based on simulation results.

requirements/FR-013.md Normal file
View File

@@ -0,0 +1,40 @@
# Functional Requirement FR-013: Capital Expenditure (Capex) management and calculation tools for mining projects
## Description
The system shall provide tools to calculate and manage Capital Expenditures (Capex) associated with mining projects. This includes the ability to input, store, and analyze various capital cost components such as equipment purchases, infrastructure development, land acquisition, and other one-time expenses.
## Rationale
Accurate Capex calculations are essential for budgeting, financial planning, and profitability analysis in mining projects. By providing robust tools to manage these costs, the system will enable users to make informed decisions regarding project feasibility and capital allocation.
## Acceptance Criteria
1. **Capex Component Definition**: The system shall allow users to define and categorize various Capex components, including but not limited to:
- Equipment purchases
- Infrastructure development
- Land acquisition
- Other one-time expenses
2. **Input and Storage**: The system shall provide interfaces for users to input Capex data and store it in a structured format for easy retrieval and analysis.
3. **Capex Calculation**: The system shall include tools to calculate total Capex based on the defined components and their respective values. This includes the ability to apply formulas and algorithms to derive insights from the data.
4. **Reporting and Visualization**: The system shall offer reporting and visualization features to help users understand Capex trends, identify cost drivers, and make data-driven decisions.
5. **Integration**: The system shall support integration with other financial and operational systems to ensure seamless data flow and consistency across platforms.
## Dependencies
- Integration with existing financial modules for comprehensive cost analysis.
- User interface components for inputting and managing Capex data.
- Data validation mechanisms to ensure accuracy and consistency of Capex inputs.
- Reporting and visualization libraries to support analysis and presentation of Capex data.
## Notes
- The Capex management tools should be designed for scalability to accommodate large datasets and complex cost structures.
- Users should be able to easily update Capex components and their values as project conditions change.
- Collaboration features may be beneficial to allow multiple users to work on Capex scenarios simultaneously.
- Future enhancements may include predictive analytics and machine learning capabilities to improve Capex forecasting and optimization.

requirements/FR-014.md Normal file
View File

@@ -0,0 +1,41 @@
# Functional Requirement FR-014: Operational Expenditure (Opex) management and calculation tools for mining projects
## Description
The system shall provide tools to calculate and manage Operational Expenditures (Opex) associated with mining projects. This includes the ability to input, store, and analyze various operational cost components such as labor, materials, energy consumption, maintenance, and other recurring expenses.
## Rationale
Accurate Opex calculations are essential for budgeting, financial planning, and profitability analysis in mining projects. By providing robust tools to manage these costs, the system will enable users to make informed decisions regarding project feasibility and operational efficiency.
## Acceptance Criteria
1. **Opex Component Definition**: The system shall allow users to define and categorize various Opex components, including but not limited to:
- Labor costs
- Material costs
- Energy costs
- Maintenance costs
- Other recurring expenses
2. **Input and Storage**: The system shall provide interfaces for users to input Opex data and store it in a structured format for easy retrieval and analysis.
3. **Opex Calculation**: The system shall include tools to calculate total Opex based on the defined components and their respective values. This includes the ability to apply formulas and algorithms to derive insights from the data.
4. **Reporting and Visualization**: The system shall offer reporting and visualization features to help users understand Opex trends, identify cost drivers, and make data-driven decisions.
5. **Integration**: The system shall support integration with other financial and operational systems to ensure seamless data flow and consistency across platforms.
## Dependencies
- Integration with existing financial modules for comprehensive cost analysis.
- User interface components for inputting and managing Opex data.
- Data validation mechanisms to ensure accuracy and consistency of Opex inputs.
- Reporting and visualization libraries to support analysis and presentation of Opex data.
## Notes
- The Opex management tools should be designed for scalability to accommodate large datasets and complex cost structures.
- Users should be able to easily update Opex components and their values as project conditions change.
- Collaboration features may be beneficial to allow multiple users to work on Opex scenarios simultaneously.
- Future enhancements may include predictive analytics and machine learning capabilities to improve Opex forecasting and optimization.

View File

@@ -22,5 +22,9 @@ This document outlines the key functional requirements for the CalMiner project.
| FR-008 | Facilitate export of analysis results in various formats (PDF, Excel, etc.). | Medium | [FR-008](../requirements/FR-008.md) |
| FR-009 | Provide a user-friendly interface accessible via web browsers. | High | [FR-009](../requirements/FR-009.md) |
| FR-010 | Enable collaboration features for multiple users to work on scenarios simultaneously. | Medium | [FR-010](../requirements/FR-010.md) |
| FR-011 | Basic Profitability calculation for ore mining products based on input parameters. | High | [FR-011](../requirements/FR-011.md) |
| FR-012 | Monte Carlo simulation engine to model uncertainties in mining project parameters. | High | [FR-012](../requirements/FR-012.md) |
| FR-013 | Capital Expenditure (Capex) management and calculation tools for mining projects. | High | [FR-013](../requirements/FR-013.md) |
| FR-014 | Operational Expenditure (Opex) calculation tools for processing costs in mining. | High | [FR-014](../requirements/FR-014.md) |
Each functional requirement is detailed in its respective document, providing a comprehensive description, rationale, acceptance criteria, and dependencies. Please refer to the linked documents for more information on each requirement.

View File

@@ -1,3 +1,8 @@
# User Guide
<!-- TODO: Create a user guide for Calminer. This should include instructions on how to use the various features of the application, along with screenshots and examples where applicable. -->
CalMiner user-facing documentation is organized by feature area. Start with the guides below and expand the repository as new workflows ship.
## Available Guides
- [Capex Planner](initial_capex_planner.md) — capture upfront capital components, run calculations, and persist snapshots for projects and scenarios.
- [Opex Planner](opex_planner.md) — manage recurring operational costs, apply escalation/discount assumptions, and store calculation snapshots.

View File

@@ -0,0 +1,99 @@
# Data Import & Export Templates
This guide explains how to capture profitability, capex, and opex data for bulk upload or download using the standardized CSV and Excel templates. It builds on the field inventory documented in [data_import_export_field_inventory.md](data_import_export_field_inventory.md) and should be shared with anyone preparing data for the calculation engine.
## Template Package
The import/export toolkit contains two artifacts:
| File | Purpose |
| -------------------------- | ---------------------------------------------------------------------------------------------------- |
| `calminer_financials.csv` | Single-sheet CSV template for quick edits or integrations that generate structured rows dynamically. |
| `calminer_financials.xlsx` | Excel workbook with dedicated sheets, lookup lists, and validation rules for guided data capture. |
Always distribute the templates as a matched set so contributors can choose the format that best suits their workflow.
## CSV Workflow
### Structure Overview
The CSV template uses a `record_type` column to multiplex multiple logical tables inside one file. The exact column requirements, validation ranges, and record types are defined in the [Unified CSV Template Specification](data_import_export_field_inventory.md#unified-csv-template-specification). Key expectations:
- All rows include the scenario code (`scenario_code`), and optionally the project code, to keep capex and opex data aligned with profitability assumptions.
- Numeric cells must use `.` as the decimal separator and must not include thousands separators, currency symbols, or other formatting.
- Boolean values accept `true`/`false`, `yes`/`no`, or `1`/`0`; exporters should emit `true` or `false`.
### Authoring Steps
1. Populate at least one `profitability_input` row per scenario to establish the calculation context.
2. Add optional `profitability_impurity` rows where penalty modelling is required.
3. Capture capital costs via `capex_component` rows and configure options with a `capex_parameters` record.
4. Capture recurring costs via `opex_component` rows. Add an `opex_parameters` row to define escalation settings.
5. Validate the file against the rules listed in the specification prior to upload. Recommended tooling includes frictionless data checks or schema-enforced pipelines.
### Naming & Delivery
- Use the pattern `calminer_financials_<project-or-scenario-code>_<YYYYMMDD>.csv` when transferring files between teams.
- Provide a brief change log or summary per file so reviewers understand the scope of updates.
## Excel Workflow
### Workbook Layout
The Excel workbook contains structured sheets described in the [Excel Workbook Layout & Validation Rules](data_import_export_field_inventory.md#excel-workbook-layout--validation-rules) section. Highlights:
- The `Summary` sheet captures metadata (scenario, preparer, notes) that propagates into the downstream sheets via Excel Table references.
- Each data sheet (`Profitability_Input`, `Capex_Components`, `Opex`, etc.) is formatted as an Excel Table with frozen headers and data validation to enforce numeric ranges, lookup values, and boolean lists.
- The `Lookups` sheet holds named ranges for currencies, categories, and frequency values. Keep this sheet hidden to reduce accidental edits.
### Data Entry Checklist
- Confirm the `Summary` sheet lists every scenario included in the workbook before editing dependent sheets.
- Use the provided dropdowns for category, currency, frequency, and boolean fields to avoid invalid values.
- When copying data from other workbooks, paste values only to preserve validation rules.
- Turn on the totals row for component tables to quickly verify aggregate capex and opex amounts.
### Exporting From Excel
When exporting to CSV or ingesting into the API:
- Ensure the workbook is saved in `.xlsx` format; do not rename the sheets.
- If a CSV export is required, export each sheet separately and merge using the column headers defined in the CSV specification.
- Retain the original workbook as your source of truth; conversions to CSV should be treated as transient files for automation pipelines.
## Import Pipeline Expectations
The planned FastAPI import endpoints will perform the following checks:
- Validate record types, required columns, and numeric bounds according to the schemas described in the field inventory.
- Verify that referenced project/scenario codes exist and that foreign key relationships (components, parameters) target a single scenario within the file.
- Normalize casing for category slugs and currency codes to match the calculation services.
- Reject files that mix multiple projects unless intentionally enabled by configuration.
Until the endpoints are available, internal teams can use the specification as the contract for pre-validation scripts or interim ETL routines.
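A minimal pre-validation sketch along those lines is shown below; it only checks record types and required columns, the column sets are abbreviated from the specification, and the error-reporting style is an assumption.

```python
import csv
from pathlib import Path

# Record types permitted by the unified template and the columns each must carry
# (abbreviated subset of the specification for illustration).
REQUIRED_COLUMNS: dict[str, set[str]] = {
    "profitability_input": {"metal", "ore_tonnage", "head_grade_pct", "recovery_pct", "reference_price"},
    "profitability_impurity": {"name"},
    "capex_component": {"name", "category", "amount"},
    "capex_parameters": set(),
    "opex_component": {"name", "category", "unit_cost", "quantity", "frequency"},
    "opex_parameters": set(),
}


def validate_csv(path: Path) -> list[str]:
    """Return human-readable problems; an empty list means the file passes these basic checks."""
    problems: list[str] = []
    with path.open(newline="", encoding="utf-8") as handle:
        # Header is row 1, so data rows start at line 2.
        for line_no, row in enumerate(csv.DictReader(handle), start=2):
            record_type = (row.get("record_type") or "").strip().lower()
            if record_type not in REQUIRED_COLUMNS:
                problems.append(f"row {line_no}: unknown record_type {record_type!r}")
                continue
            missing = [c for c in REQUIRED_COLUMNS[record_type] if not (row.get(c) or "").strip()]
            if missing:
                problems.append(f"row {line_no}: missing required columns {missing}")
    return problems
```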
## Exporting Data
Export functionality will mirror the same structures:
- CSV exports will group rows by `record_type`, enabling direct round-tripping with minimal transformation.
- Excel exports will populate the same workbook sheets, preserving lookup lists and validation to support offline edits.
Document any local transformations or additional columns introduced during export so that downstream consumers can reconcile the data with the import templates.
## Change Management
- Increment the `import_version` value on the `Summary` sheet (and document the change here) whenever template-breaking updates occur.
- Update `data_import_export_field_inventory.md` first when adding or removing columns, then revise this guide to explain the user-facing workflow impact.
- Communicate updates to stakeholders and archive superseded templates to avoid drift.
## Stakeholder Review Checklist
Before finalising the templates for production use, circulate the following package to domain stakeholders (finance, engineering, data operations) and collect their feedback:
- `calminer_financials.csv` sample populated with representative profitability, capex, and opex data.
- `calminer_financials.xlsx` workbook showcasing all sheets, dropdowns, and validation rules.
- Links to documentation: this guide and the [field inventory](data_import_export_field_inventory.md).
- Summary of outstanding questions or assumptions (e.g., default currencies, additional categories).
Record the review session outcomes in the project tracker or meeting notes. Once sign-off is received, mark the stakeholder feedback step as complete in `.github/instructions/TODO.md` and proceed with implementation of import/export endpoints.

View File

@@ -0,0 +1,421 @@
# Data Import/Export Field Inventory
This inventory captures the current data fields involved in profitability, capex, and opex workflows. It consolidates inputs accepted by the calculation services, derived outputs that should be available for export, and persisted snapshot columns. The goal is to ground the upcoming CSV/Excel template design in authoritative field definitions.
## Profitability
### Calculation Inputs (`schemas/calculations.py::ProfitabilityCalculationRequest`)
| Field | Type | Constraints & Notes |
| -------------------------- | --------------------- | ---------------------------------------------------------- |
| `metal` | `str` | Required; trimmed lowercase; ore/metal identifier. |
| `ore_tonnage` | `PositiveFloat` | Required; tonnes processed (> 0). |
| `head_grade_pct` | `float` | Required; 0 < value ≤ 100. |
| `recovery_pct` | `float` | Required; 0 < value ≤ 100. |
| `payable_pct` | `float \| None` | Optional; 0 < value ≤ 100; overrides metadata default. |
| `reference_price` | `PositiveFloat` | Required; price per unit in base currency. |
| `treatment_charge` | `float` | ≥ 0. |
| `smelting_charge` | `float` | ≥ 0. |
| `moisture_pct` | `float` | ≥ 0 and ≤ 100. |
| `moisture_threshold_pct` | `float \| None` | Optional; ≥ 0 and ≤ 100. |
| `moisture_penalty_per_pct` | `float \| None` | Optional; penalty per excess moisture %. |
| `premiums` | `float` | Monetary premium adjustments (can be negative). |
| `fx_rate` | `PositiveFloat` | Multiplier to convert to scenario currency; defaults to 1. |
| `currency_code` | `str \| None` | Optional ISO 4217 code; uppercased. |
| `opex` | `float` | ≥ 0; feeds cost aggregation. |
| `sustaining_capex` | `float` | ≥ 0. |
| `capex` | `float` | ≥ 0. |
| `discount_rate` | `float \| None` | Optional; 0 ≤ value ≤ 100. |
| `periods` | `int` | Evaluation periods; 1 ≤ value ≤ 120. |
| `impurities` | `List[ImpurityInput]` | Optional; see table below. |
### Impurity Rows (`schemas/calculations.py::ImpurityInput`)
| Field | Type | Constraints & Notes |
| ----------- | --------------- | -------------------------------------------------- |
| `name` | `str` | Required; trimmed. |
| `value` | `float \| None` | Optional; ≥ 0; measured in ppm. |
| `threshold` | `float \| None` | Optional; ≥ 0. |
| `penalty` | `float \| None` | Optional; penalty factor applied beyond threshold. |
### Derived Outputs (selected)
| Structure | Fields |
| --------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `PricingResult` (`services/pricing.py`) | `metal`, `ore_tonnage`, `head_grade_pct`, `recovery_pct`, `payable_metal_tonnes`, `reference_price`, `gross_revenue`, `moisture_penalty`, `impurity_penalty`, `treatment_smelt_charges`, `premiums`, `net_revenue`, `currency`. |
| `ProfitabilityCosts` | `opex_total`, `sustaining_capex_total`, `capex`. |
| `ProfitabilityMetrics` | `npv`, `irr`, `payback_period`, `margin`. |
| `CashFlowEntry` | `period`, `revenue`, `opex`, `sustaining_capex`, `net`. |
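For orientation, the sketch below reproduces two of the metric fields from a list of period cash flows using standard financial definitions; it is not the `services/pricing.py` implementation and omits IRR.

```python
def npv(cash_flows: list[float], discount_rate_pct: float) -> float:
    """Discount period cash flows (period 0 first) at the given annual rate."""
    rate = discount_rate_pct / 100
    return sum(cf / (1 + rate) ** period for period, cf in enumerate(cash_flows))


def payback_period(cash_flows: list[float]) -> float | None:
    """Return the first period whose cumulative cash flow is non-negative, or None."""
    cumulative = 0.0
    for period, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative >= 0:
            return float(period)
    return None


# Example: initial capex of 1000 followed by four periods of 350 net cash flow.
flows = [-1000.0, 350.0, 350.0, 350.0, 350.0]
print(round(npv(flows, discount_rate_pct=8), 2), payback_period(flows))
```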
### Snapshot Columns (`models/profitability_snapshot.py`)
| Field | Type | Notes |
| ---------------------------- | ---------------- | ------------------------------------------------- |
| `project_id` / `scenario_id` | `int` | Foreign key; context. |
| `created_by_id` | `int \| None` | Author reference. |
| `calculation_source` | `str \| None` | Identifier for calculation run. |
| `calculated_at` | `datetime` | Server timestamp. |
| `currency_code` | `str \| None` | ISO code. |
| `npv` | `Numeric(18, 2)` | Net present value. |
| `irr_pct` | `Numeric(12, 6)` | Internal rate of return (%). |
| `payback_period_years` | `Numeric(12, 4)` | Years to payback. |
| `margin_pct` | `Numeric(12, 6)` | Profit margin %. |
| `revenue_total` | `Numeric(18, 2)` | Total revenue. |
| `opex_total` | `Numeric(18, 2)` | Linked opex cost. |
| `sustaining_capex_total` | `Numeric(18, 2)` | Sustaining capital spend. |
| `capex` | `Numeric(18, 2)` | Capex input. |
| `net_cash_flow_total` | `Numeric(18, 2)` | Sum of discounted cash flows. |
| `payload` | `JSON` | Serialized detail (cash flow arrays, parameters). |
| `created_at`, `updated_at` | `datetime` | Audit timestamps. |
## Capex
### Component Inputs (`schemas/calculations.py::CapexComponentInput`)
| Field | Type | Constraints & Notes |
| ------------ | ------------- | ----------------------------------------------- |
| `id` | `int \| None` | Optional persistent identifier; ≥ 1. |
| `name` | `str` | Required; trimmed. |
| `category` | `str` | Required; trimmed lowercase; category slug. |
| `amount` | `float` | Required; ≥ 0; nominal value. |
| `currency` | `str \| None` | Optional ISO 4217 code; uppercased. |
| `spend_year` | `int \| None` | Optional; 0 ≤ value ≤ 120; relative spend year. |
| `notes` | `str \| None` | Optional; max length 500. |
### Capex Parameters & Options
| Structure | Field | Type | Notes |
| ------------------------- | -------------------------- | --------------- | ------------------------------ |
| `CapexParameters` | `currency_code` | `str \| None` | Optional ISO code; uppercased. |
| | `contingency_pct` | `float \| None` | 0 ≤ value ≤ 100. |
| | `discount_rate_pct` | `float \| None` | 0 ≤ value ≤ 100. |
| | `evaluation_horizon_years` | `int \| None` | 1 ≤ value ≤ 100. |
| `CapexCalculationOptions` | `persist` | `bool` | Persist snapshot when true. |
### Derived Outputs (`CapexCalculationResult`)
| Segment | Fields |
| ------------------------------- | ---------------------------------------------------------------------------------------- |
| Totals (`CapexTotals`) | `overall`, `contingency_pct`, `contingency_amount`, `with_contingency`, `by_category[]`. |
| Timeline (`CapexTimelineEntry`) | `year`, `spend`, `cumulative`. |
| Echoed Inputs | Normalized `components`, resolved `parameters`, `options`, `currency`. |
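The relationship between components, contingency, and the timeline can be sketched as follows; this approximates what `calculate_initial_capex` is documented to return, with plain dictionaries standing in for the Pydantic result models.

```python
from collections import defaultdict


def capex_totals(components: list[dict], contingency_pct: float = 0.0) -> dict:
    """Aggregate component spend, apply contingency, and build a cumulative timeline."""
    overall = sum(c["amount"] for c in components)
    contingency_amount = overall * contingency_pct / 100

    by_category: dict[str, float] = defaultdict(float)
    spend_by_year: dict[int, float] = defaultdict(float)
    for c in components:
        by_category[c["category"]] += c["amount"]
        # Rows without a spend year still count toward totals but not the timeline.
        if c.get("spend_year") is not None:
            spend_by_year[c["spend_year"]] += c["amount"]

    timeline, cumulative = [], 0.0
    for year in sorted(spend_by_year):
        cumulative += spend_by_year[year]
        timeline.append({"year": year, "spend": spend_by_year[year], "cumulative": cumulative})

    return {
        "overall": overall,
        "contingency_pct": contingency_pct,
        "contingency_amount": contingency_amount,
        "with_contingency": overall + contingency_amount,
        "by_category": dict(by_category),
        "timeline": timeline,
    }
```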
### Snapshot Columns (`models/capex_snapshot.py`)
| Field | Type | Notes |
| ---------------------------- | ---------------- | -------------------------------------------- |
| `project_id` / `scenario_id` | `int` | Foreign key; context. |
| `created_by_id` | `int \| None` | Author reference. |
| `calculation_source` | `str \| None` | Origin tag. |
| `calculated_at` | `datetime` | Server timestamp. |
| `currency_code` | `str \| None` | ISO code. |
| `total_capex` | `Numeric(18, 2)` | Sum of component spend. |
| `contingency_pct` | `Numeric(12, 6)` | Applied contingency %. |
| `contingency_amount` | `Numeric(18, 2)` | Monetary contingency. |
| `total_with_contingency` | `Numeric(18, 2)` | All-in capex. |
| `component_count` | `int \| None` | Count of components. |
| `payload` | `JSON` | Serialized breakdown (components, timeline). |
| `created_at`, `updated_at` | `datetime` | Audit timestamps. |
## Opex
### Component Inputs (`schemas/calculations.py::OpexComponentInput`)
| Field | Type | Constraints & Notes |
| -------------- | ------------- | -------------------------------------------------- |
| `id` | `int \| None` | Optional persistent identifier; ≥ 1. |
| `name` | `str` | Required; trimmed. |
| `category` | `str` | Required; trimmed lowercase; category slug. |
| `unit_cost` | `float` | Required; ≥ 0; base currency. |
| `quantity` | `float` | Required; ≥ 0. |
| `frequency` | `str` | Required; trimmed lowercase; e.g. annual, monthly. |
| `currency` | `str \| None` | Optional ISO 4217 code; uppercased. |
| `period_start` | `int \| None` | Optional; ≥ 0; evaluation period index. |
| `period_end` | `int \| None` | Optional; ≥ 0; must be ≥ `period_start`. |
| `notes` | `str \| None` | Optional; max length 500. |
### Opex Parameters & Options
| Structure | Field | Type | Notes |
| ---------------- | -------------------------- | --------------- | ------------------------------ |
| `OpexParameters` | `currency_code` | `str \| None` | Optional ISO code; uppercased. |
| | `escalation_pct` | `float \| None` | 0 ≤ value ≤ 100. |
| | `discount_rate_pct` | `float \| None` | 0 ≤ value ≤ 100. |
| | `evaluation_horizon_years` | `int \| None` | 1 ≤ value ≤ 100. |
| | `apply_escalation` | `bool` | Toggle escalation usage. |
| `OpexOptions` | `persist` | `bool` | Persist snapshot when true. |
| | `snapshot_notes` | `str \| None` | Optional; max length 500. |
### Derived Outputs (`OpexCalculationResult`)
| Segment | Fields |
| ------------------------------ | ----------------------------------------------------------------------- |
| Totals (`OpexTotals`) | `overall_annual`, `escalated_total`, `escalation_pct`, `by_category[]`. |
| Timeline (`OpexTimelineEntry`) | `period`, `base_cost`, `escalated_cost`. |
| Metrics (`OpexMetrics`) | `annual_average`, `cost_per_ton`. |
| Echoed Inputs | Normalized `components`, `parameters`, `options`, resolved `currency`. |
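A hedged sketch of the escalation arithmetic is shown below; whether escalation starts in the first or second period is an assumption here, and the real service aggregates per-component costs rather than a single annual figure.

```python
def opex_projection(
    annual_base_cost: float,
    horizon_years: int,
    escalation_pct: float = 0.0,
    apply_escalation: bool = True,
) -> dict:
    """Escalate a recurring annual cost over the evaluation horizon."""
    rate = escalation_pct / 100 if apply_escalation else 0.0
    timeline = []
    for period in range(1, horizon_years + 1):
        # Assumption: the first period is charged at the base cost, escalation compounds afterwards.
        escalated = annual_base_cost * (1 + rate) ** (period - 1)
        timeline.append({"period": period, "base_cost": annual_base_cost, "escalated_cost": escalated})

    escalated_total = sum(entry["escalated_cost"] for entry in timeline)
    return {
        "overall_annual": annual_base_cost,
        "escalated_total": escalated_total,
        "annual_average": escalated_total / horizon_years,
        "timeline": timeline,
    }
```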
### Snapshot Columns (`models/opex_snapshot.py`)
| Field | Type | Notes |
| ---------------------------- | ----------------- | ----------------------------------------------------- |
| `project_id` / `scenario_id` | `int` | Foreign key; context. |
| `created_by_id` | `int \| None` | Author reference. |
| `calculation_source` | `str \| None` | Origin tag. |
| `calculated_at` | `datetime` | Server timestamp. |
| `currency_code` | `str \| None` | ISO code. |
| `overall_annual` | `Numeric(18, 2)` | Annual recurring cost. |
| `escalated_total` | `Numeric(18, 2)` | Total cost over horizon with escalation. |
| `annual_average` | `Numeric(18, 2)` | Average annual spend. |
| `evaluation_horizon_years` | `Integer` | Horizon length. |
| `escalation_pct` | `Numeric(12, 6)` | Escalation %. |
| `apply_escalation` | `Boolean` | Flag used for calculation. |
| `component_count` | `Integer \| None` | Component count. |
| `payload` | `JSON` | Serialized breakdown (components, timeline, metrics). |
| `created_at`, `updated_at` | `datetime` | Audit timestamps. |
## Unified CSV Template Specification
### Overview
The CSV template consolidates profitability, capex, and opex data into a single file. Each row declares its purpose via a `record_type` flag so that parsers can route data to the appropriate service. The format is UTF-8 with a header row and uses commas as delimiters. Empty string cells are interpreted as `null`. Monetary values must not include currency symbols; decimals use a period.
### Shared Columns
| Column | Required | Applies To | Validation & Notes |
| --------------- | -------- | ---------------------------- | --------------------------------------------------------------------------------- |
| `record_type` | Yes | All rows | Lowercase slug describing the row (see record types below). |
| `project_code` | No | All rows | Optional external project identifier; trimmed; max length 64. |
| `scenario_code` | No | All rows | Optional external scenario identifier; trimmed; max length 64. |
| `sequence` | No | Component and impurity rows | Optional positive integer governing ordering in UI exports. |
| `notes` | No | Component and parameter rows | Free-form text ≤ 500 chars; trimmed; mirrors request `notes` fields when present. |
### Record Types
#### `profitability_input`
Single row per scenario capturing the fields required by `ProfitabilityCalculationRequest`.
| Column | Required | Validation Notes |
| -------------------------- | -------- | -------------------------------------- |
| `metal`                    | Yes      | Lowercase slug; 1-32 chars.            |
| `ore_tonnage` | Yes | Decimal > 0. |
| `head_grade_pct` | Yes | Decimal > 0 and ≤ 100. |
| `recovery_pct` | Yes | Decimal > 0 and ≤ 100. |
| `payable_pct` | No | Decimal > 0 and ≤ 100. |
| `reference_price` | Yes | Decimal > 0. |
| `treatment_charge` | No | Decimal ≥ 0. |
| `smelting_charge` | No | Decimal ≥ 0. |
| `moisture_pct` | No | Decimal ≥ 0 and ≤ 100. |
| `moisture_threshold_pct` | No | Decimal ≥ 0 and ≤ 100. |
| `moisture_penalty_per_pct` | No | Decimal; allow negative for credits. |
| `premiums` | No | Decimal; allow negative. |
| `fx_rate` | No | Decimal > 0; defaults to 1 when blank. |
| `currency_code` | No | ISO 4217 code; uppercase; length 3. |
| `opex` | No | Decimal ≥ 0. |
| `sustaining_capex` | No | Decimal ≥ 0. |
| `capex` | No | Decimal ≥ 0. |
| `discount_rate` | No | Decimal ≥ 0 and ≤ 100. |
| `periods`                  | No       | Integer 1-120.                         |
#### `profitability_impurity`
Multiple rows permitted per scenario; maps to `ImpurityInput`.
| Column | Required | Validation Notes |
| ----------- | -------- | ------------------------------------- |
| `name`      | Yes      | Trimmed string; 1-64 chars.           |
| `value` | No | Decimal ≥ 0. |
| `threshold` | No | Decimal ≥ 0. |
| `penalty` | No | Decimal; can be negative for credits. |
#### `capex_component`
One row per component feeding `CapexComponentInput`.
| Column | Required | Validation Notes |
| -------------- | -------- | ---------------------------------------------------- |
| `component_id` | No | Integer ≥ 1; references existing record for updates. |
| `name`         | Yes      | Trimmed string; 1-128 chars.                         |
| `category` | Yes | Lowercase slug; matches allowed UI categories. |
| `amount` | Yes | Decimal ≥ 0. |
| `currency` | No | ISO 4217 code; uppercase; length 3. |
| `spend_year`   | No       | Integer 0-120.                                        |
#### `capex_parameters`
At most one row per scenario; populates `CapexParameters` and `CapexCalculationOptions`.
| Column | Required | Validation Notes |
| -------------------------- | -------- | ------------------------------------------------ |
| `currency_code` | No | ISO 4217 code; uppercase. |
| `contingency_pct` | No | Decimal ≥ 0 and ≤ 100. |
| `discount_rate_pct` | No | Decimal ≥ 0 and ≤ 100. |
| `evaluation_horizon_years` | No       | Integer 1-100.                                    |
| `persist_snapshot` | No | `true`/`false` case-insensitive; defaults false. |
#### `opex_component`
Maps to `OpexComponentInput`; multiple rows allowed.
| Column | Required | Validation Notes |
| -------------- | -------- | ------------------------------------------------------------------------- |
| `component_id` | No | Integer ≥ 1; references existing record for updates. |
| `name`         | Yes      | Trimmed string; 1-128 chars.                                              |
| `category` | Yes | Lowercase slug; matches enum used in UI (e.g. energy, labor). |
| `unit_cost` | Yes | Decimal ≥ 0. |
| `quantity` | Yes | Decimal ≥ 0. |
| `frequency` | Yes | Lowercase slug; allowed values: `annual`, `monthly`, `quarterly`, `once`. |
| `currency` | No | ISO 4217 code; uppercase; length 3. |
| `period_start` | No | Integer ≥ 0; must be ≤ `period_end` when provided. |
| `period_end` | No | Integer ≥ 0; defaults to `period_start` when blank. |
#### `opex_parameters`
At most one row per scenario; maps to `OpexParameters` and options.
| Column | Required | Validation Notes |
| -------------------------- | -------- | ------------------------------- |
| `currency_code` | No | ISO 4217 code; uppercase. |
| `escalation_pct` | No | Decimal ≥ 0 and ≤ 100. |
| `discount_rate_pct` | No | Decimal ≥ 0 and ≤ 100. |
| `evaluation_horizon_years` | No       | Integer 1-100.                  |
| `apply_escalation` | No | `true`/`false`; defaults true. |
| `persist_snapshot` | No | `true`/`false`; defaults false. |
| `snapshot_notes` | No | Free-form text ≤ 500 chars. |
### Validation Rules Summary
- Parsers must group rows by `project_code` + `scenario_code`; missing codes fall back to payload metadata supplied during import.
- `record_type` values outside the table must raise a validation error.
- Component identifiers (`component_id`) are optional for inserts but required to overwrite existing records.
- Decimal columns should accept up to two fractional places for currency-aligned fields (`amount`, `overall`, etc.) and up to six for percentage columns.
- Boolean columns accept `true`, `false`, `1`, `0`, `yes`, `no` (case-insensitive); exporters should emit `true`/`false` (see the coercion helpers sketched below).
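The boolean and decimal conventions above can be enforced with small coercion helpers; the following is a sketch under those rules, with illustrative function names and an assumed rounding behaviour for decimals.

```python
def parse_bool(raw: str | None, default: bool = False) -> bool:
    """Coerce the accepted spellings to a boolean; blank cells fall back to the default."""
    if raw is None or raw.strip() == "":
        return default
    value = raw.strip().lower()
    if value in {"true", "yes", "1"}:
        return True
    if value in {"false", "no", "0"}:
        return False
    raise ValueError(f"invalid boolean value: {raw!r}")


def parse_decimal(raw: str | None, places: int) -> float | None:
    """Parse a plain decimal with '.' separator and cap the fractional digits."""
    if raw is None or raw.strip() == "":
        return None
    text = raw.strip()
    if "," in text:
        raise ValueError("thousands separators and ',' decimals are not allowed")
    return round(float(text), places)
```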
## Excel Workbook Layout & Validation Rules
### Workbook Structure
| Sheet Name | Purpose |
| -------------------------- | ------------------------------------------------------------------------------------- |
| `Summary` | Capture project/scenario metadata and import scope details. |
| `Profitability_Input` | Tabular input for `ProfitabilityCalculationRequest`. |
| `Profitability_Impurities` | Optional impurity rows linked to a scenario. |
| `Capex_Components` | Component-level spend records for the capex planner. |
| `Capex_Parameters` | Global capex parameters/options for a scenario. |
| `Opex` | Component-level recurring cost records for opex. |
| `Opex_Parameters` | Global opex parameters/options for a scenario. |
| `Lookups` | Controlled vocabulary lists consumed by data validation (categories, booleans, etc.). |
All sheets except `Lookups` start with a frozen header row and are formatted as Excel Tables (e.g., `tbl_profitability`). Tables enforce consistent column names and simplify import parsing.
### `Summary` Sheet
| Column | Required | Validation & Notes |
| ---------------- | -------- | --------------------------------------------------------------------------------- |
| `import_version` | Yes | Static text `v1`; used to detect template drift. |
| `project_code` | No | Matches shared `project_code`; max 64 chars; trimmed. |
| `scenario_code` | Yes | Identifier tying all sheets together; max 64 chars; duplicates allowed for batch. |
| `prepared_by` | No | Free-form text ≤ 128 chars. |
| `prepared_on` | No | Excel date; data validation restricts to dates ≥ `TODAY()-365`. |
| `notes` | No | Free-form text ≤ 500 chars; carry-over to import metadata. |
### `Profitability_Input`
Columns mirror the CSV specification; data validation rules apply per cell:
- Numeric fields (`ore_tonnage`, `reference_price`, etc.) use decimal validation with explicit min/max aligned to service constraints.
- Percentage fields (`head_grade_pct`, `discount_rate`) use decimals with bounds (e.g., 0-100). Apply `Data Validation → Decimal` settings.
- `currency_code` validation references `Lookups!$A:$A` (ISO codes list).
- Default table rows carry the scenario code through the structured reference `=[@Scenario_Code]`, which autofills once the scenario code is set.
### `Profitability_Impurities`
| Column | Required | Validation & Notes |
| --------------- | -------- | ----------------------------------------------------- |
| `scenario_code` | Yes | Drop-down referencing `Summary!C:C`; ensures linkage. |
| `name` | Yes | Text ≤ 64 chars; duplicates allowed. |
| `value` | No | Decimal ≥ 0. |
| `threshold` | No | Decimal ≥ 0. |
| `penalty` | No | Decimal; allow negatives. |
### `Capex_Components`
| Column | Required | Validation & Notes |
| --------------- | -------- | ------------------------------------------------------ |
| `scenario_code` | Yes | Drop-down referencing `Summary!C:C`. |
| `component_id` | No | Whole number ≥ 1; optional when inserting new records. |
| `name` | Yes | Text ≤ 128 chars. |
| `category` | Yes | Drop-down referencing `Lookups!category_values`. |
| `amount` | Yes | Decimal ≥ 0; formatted as currency (no symbol). |
| `currency` | No | Drop-down referencing `Lookups!currency_codes`. |
| `spend_year`    | No       | Whole number 0-120.                                     |
| `notes` | No | Text ≤ 500 chars. |
### `Capex_Parameters`
Single-row table per scenario with structured references:
| Column | Required | Validation & Notes |
| -------------------------- | -------- | ------------------------------------------------------------ |
| `scenario_code` | Yes | Drop-down referencing `Summary!C:C`. |
| `currency_code` | No | Drop-down referencing `Lookups!currency_codes`. |
| `contingency_pct`          | No       | Decimal 0-100 with two decimal places.                       |
| `discount_rate_pct`        | No       | Decimal 0-100.                                               |
| `evaluation_horizon_years` | No       | Whole number 1-100.                                          |
| `persist_snapshot` | No | Drop-down referencing `Lookups!boolean_values` (True/False). |
| `notes` | No | Text ≤ 500 chars; maps to request options metadata. |
### `Opex`
| Column | Required | Validation & Notes |
| --------------- | -------- | -------------------------------------------------------------------------------------------------------------------------- |
| `scenario_code` | Yes | Drop-down referencing `Summary!C:C`. |
| `component_id` | No | Whole number ≥ 1; optional for inserts. |
| `name` | Yes | Text ≤ 128 chars. |
| `category` | Yes | Drop-down referencing `Lookups!opex_categories`. |
| `unit_cost` | Yes | Decimal ≥ 0. |
| `quantity` | Yes | Decimal ≥ 0. |
| `frequency` | Yes | Drop-down referencing `Lookups!frequency_values` (annual, monthly, quarterly, once). |
| `currency` | No | Drop-down referencing `Lookups!currency_codes`. |
| `period_start`  | No       | Whole number ≥ 0; additional rule ensures `period_end` ≥ `period_start` via custom formula `=IF(ISBLANK(H2),TRUE,H2<=I2)`.  |
| `period_end` | No | Whole number ≥ 0. |
| `notes` | No | Text ≤ 500 chars. |
### `Opex_Parameters`
| Column | Required | Validation & Notes |
| -------------------------- | -------- | ----------------------------------------------- |
| `scenario_code` | Yes | Drop-down referencing `Summary!C:C`. |
| `currency_code` | No | Drop-down referencing `Lookups!currency_codes`. |
| `escalation_pct`           | No       | Decimal 0-100 with up to two decimals.          |
| `discount_rate_pct`        | No       | Decimal 0-100.                                  |
| `evaluation_horizon_years` | No       | Whole number 1-100.                             |
| `apply_escalation` | No | Drop-down referencing `Lookups!boolean_values`. |
| `persist_snapshot` | No | Drop-down referencing `Lookups!boolean_values`. |
| `snapshot_notes` | No | Text ≤ 500 chars. |
### `Lookups` Sheet
Contains named ranges used by validation rules:
| Named Range | Column Contents |
| ------------------ | ---------------------------------------------- |
| `currency_codes` | ISO 4217 codes supported by the platform. |
| `category_values` | Allowed capex categories (e.g., engineering). |
| `opex_categories` | Allowed opex categories (e.g., energy, labor). |
| `frequency_values` | `annual`, `monthly`, `quarterly`, `once`. |
| `boolean_values` | `TRUE`, `FALSE`. |
The sheet is hidden by default to avoid accidental edits. Import logic should bundle the lookup dictionary alongside the workbook to verify user-supplied values.
### Additional Validation Guidance
- Protect header rows to prevent renaming; enable `Allow Users to Edit Ranges` for data sections only.
- Apply conditional formatting to highlight missing required fields (`ISBLANK`) and out-of-range values.
- Provide data validation error messages explaining expected ranges to reduce back-and-forth with users.
- Recommend enabling the Excel Table totals row for quick sanity checks (sum of amounts, counts of components).
## Next Use
The consolidated tables above provide the authoritative field inventory required to draft CSV column layouts and Excel worksheet structures. They also surface validation ranges and metadata that must be preserved during import/export.

View File

@@ -0,0 +1,59 @@
# Capex Planner
The Capex Planner helps project teams capture upfront capital requirements, categorize spend, and produce a shareable breakdown that feeds downstream profitability analysis. The feature implements the acceptance criteria described in [FR-013](../requirements/FR-013.md) and persists calculation snapshots for both projects and scenarios when context is provided.
## Access Paths
- **Workspace sidebar**: Navigate to _Workspace → Capex Planner_.
- **Scenario detail page**: Use the _Capex Planner_ button in the scenario header to open the planner pre-loaded with the selected project and scenario.
## Prerequisites
- Authenticated CalMiner account with at least _viewer_ permissions on the target project/scenario.
- Baseline pricing settings seeded through the database initializer (the GET route resolves pricing metadata for defaults).
## Planner Workflow
1. **Open the planner** via the sidebar or scenario action button. Query parameters (`project_id`, `scenario_id`) preload metadata for the selected context.
2. **Review defaults** in the form header: currency code, contingency, discount rate, and evaluation horizon pull from the scenario, project, or system defaults.
3. **Capture capex components** by adding rows to the components table. Each row records:
- Category (equipment, infrastructure, land, miscellaneous, etc.)
- Component name and optional internal identifier
- Amount and currency (defaults to scenario or project currency)
- Planned spend year and notes
4. **Adjust global parameters** in the sidebar panel for contingency percentage, discount rate, and evaluation horizon years.
5. **Run the calculation** with _Save & Calculate_. The POST request validates the payload, aggregates totals, applies contingency, and produces categorized and time-phased results (a hedged request sketch follows this list).
6. **Review results** in the Capex Summary cards, category table, and timeline table. Optional chart containers are rendered for future visualization enhancements.
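For JSON clients, a request along the following lines is plausible; the field names are inferred from the workflow above and may not match `CapexCalculationRequest` exactly, so treat `tests/integration/test_capex_calculations.py` as the authoritative reference for payload shape.

```python
# Hedged sketch of a JSON submission to the Capex planner. Field names and the
# query-parameter placement of project/scenario context are assumptions.
import requests

payload = {
    "components": [
        {
            "category": "equipment",
            "name": "Primary Crusher",
            "amount": "1250000.00",
            "currency": "USD",
            "spend_year": 1,
            "notes": "Vendor quote Q3",
        }
    ],
    "parameters": {
        "currency_code": "USD",
        "contingency_pct": "10.0",
        "discount_rate_pct": "8.0",
        "evaluation_horizon_years": 10,
    },
    "options": {"persist": True},
}

response = requests.post(
    "https://calminer.example.com/calculations/capex",  # placeholder host
    params={"project_id": 1, "scenario_id": 3},
    json=payload,
    headers={"Accept": "application/json"},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```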
## Persistence Behaviour
- When a `project_id` or `scenario_id` is supplied, the POST handler stores a snapshot via `ProjectCapexSnapshot` and/or `ScenarioCapexSnapshot` before rendering results.
- Snapshots include aggregate totals, contingency metrics, component counts, and the normalized payload, enabling auditing and downstream financial modelling.
- JSON clients will be able to set `options[persist]=0` to skip persistence in a future iteration; the HTML form currently defaults to persistence enabled.
## API Reference
| Route | Method | Description |
| --------------------------------------------------- | ------ | ---------------------------------------------------------------------------------------------------------------------------------------------- |
| `/calculations/capex` (`calculations.capex_form`) | GET | Renders the planner template with defaults for the provided project/scenario context. |
| `/calculations/capex` (`calculations.capex_submit`) | POST | Accepts form or JSON payload matching `CapexCalculationRequest`, returns HTML or JSON results, and persists snapshots when context is present. |
Key schemas and services:
- `schemas.calculations.CapexCalculationRequest`, `CapexComponentInput`, `CapexParameters`
- `services.calculations.calculate_initial_capex`
- `_persist_capex_snapshots` helper in `routes/calculations.py`
Refer to `tests/integration/test_capex_calculations.py` for example payloads and persistence assertions.
## Troubleshooting
- **Validation errors**: Field-level messages appear above the components table for row-specific issues and in the global error alert for general problems.
- **Currency mismatch**: The service enforces a single currency across all components; ensure component rows use the same ISO-4217 code or adjust the default currency.
- **Timeline gaps**: Rows without a `spend_year` are excluded from the timeline but still contribute to totals.
## Related Resources
- [Capex Planner template](../../calminer/templates/scenarios/capex.html)
- [Capex snapshot models](../../calminer/models/capex_snapshot.py)
- [FR-013 Requirement](../requirements/FR-013.md)

View File

@@ -0,0 +1,114 @@
# Opex Planner
The Opex Planner captures recurring operational costs, applies escalation/discount assumptions, and persists calculation snapshots for projects and scenarios. It satisfies the Operational Expenditure tooling defined in [FR-014](../requirements/FR-014.md).
## Access Paths
- **Workspace sidebar**: Navigate to _Workspace → Opex Planner_.
- **Scenario detail header**: Use the _Opex Planner_ button to open the planner pre-loaded with the active project/scenario context.
## Prerequisites
- Authenticated CalMiner account with viewer or higher access to the target project or scenario.
- Baseline scenario/project metadata (currency, discount rate, primary resource) to seed form defaults when IDs are provided.
## Planner Workflow
1. **Open the planner** via the sidebar or scenario action. Query parameters (`project_id`, `scenario_id`) load project and scenario metadata plus the latest persisted snapshot, when available.
2. **Review defaults** in the global parameters panel. Currency, discount rate, evaluation horizon, and persist toggle derive from the scenario, project, or system defaults surfaced by `_prepare_opex_context` in `routes/calculations.py`.
3. **Capture opex components** in the components table. Each row records the details listed in the _Component Fields_ table below. Rows support dynamic add/remove interactions and inline validation messages for missing data.
4. **Adjust global parameters** such as escalation percentage, discount rate, evaluation horizon, and whether escalation applies to the timeline. Parameter defaults derive from the active scenario, project, or environment configuration.
5. **Add optional snapshot notes** that persist alongside stored results and appear in the snapshot history UI (planned) and API responses.
6. **Run the calculation** with _Save & Calculate_. The POST handler validates input via `OpexCalculationRequest`, calls `services.calculations.calculate_opex`, and repopulates the form with normalized data. When validation fails, the page returns a 422 status with component-level alerts.
7. **Review results** in the summary cards and detailed breakdowns: total annual opex, per-ton cost, category totals, escalated timeline table, and the normalized component listing. Alerts surface validation issues or component-level notices above the table.
### Component Fields
| Field | Description | Notes |
| -------------- | --------------------------------------------------------------------------- | ------------------------------------------------------ |
| `category` | Cost grouping (`labor`, `materials`, `energy`, `maintenance`, `other`). | Drives category totals and stacked charts. |
| `name` | Descriptive label for the component. | Displayed in breakdown tables and persisted snapshots. |
| `unit_cost` | Cost per unit in the selected currency. | Decimal, >= 0. |
| `quantity` | Units consumed per frequency period. | Decimal, >= 0. |
| `frequency` | Recurrence cadence (`daily`, `weekly`, `monthly`, `quarterly`, `annually`). | Controls timeline expansion scaling. |
| `currency` | ISO-4217 currency code. | Must match parameters currency when persisting. |
| `period_start` | First evaluation period (1-indexed) to include the component. | Optional; defaults to 1. |
| `period_end` | Final evaluation period to include the component. | Optional; defaults to evaluation horizon. |
| `notes` | Free-form text stored with the component. | Optional; hidden by default in HTML form. |
### Global Parameters
| Parameter | Purpose |
| -------------------------- | --------------------------------------------------------------------------------- |
| `currency_code` | Normalized currency for totals and timeline. |
| `escalation_pct` | Annual escalation applied to eligible components when `apply_escalation` is true (see the sketch after this table). |
| `discount_rate_pct` | Discount rate surfaced for downstream profitability workflows. |
| `evaluation_horizon_years` | Number of years to expand the timeline. |
| `apply_escalation` | Boolean flag enabling escalation across the timeline. |
| `persist` (options) | Persists the calculation when project/scenario context is present. |
| `snapshot_notes` (options) | Optional metadata attached to stored snapshots. |
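A minimal sketch of how frequency expansion and escalation are commonly combined is shown below; the annualization factors and year-over-year compounding are assumptions about `calculate_opex`, not confirmed behaviour.

```python
# Assumed annualization and escalation logic; the real calculate_opex service
# may use a different compounding convention or period handling.
ANNUAL_FACTORS = {"daily": 365, "weekly": 52, "monthly": 12, "quarterly": 4, "annually": 1}


def annual_cost(unit_cost: float, quantity: float, frequency: str) -> float:
    """Expand a per-period component cost to an annual figure."""
    return unit_cost * quantity * ANNUAL_FACTORS[frequency]


def escalated_timeline(
    base_annual: float,
    escalation_pct: float,
    horizon_years: int,
    apply_escalation: bool = True,
) -> list[float]:
    """Compound annual escalation from year 2 onward when enabled."""
    if not apply_escalation:
        return [base_annual] * horizon_years
    rate = escalation_pct / 100.0
    return [base_annual * (1 + rate) ** year for year in range(horizon_years)]


# Example: USD 480/month plant power escalated at 3% over a 10-year horizon.
timeline = escalated_timeline(annual_cost(480.0, 1.0, "monthly"), 3.0, 10)
```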
## Persistence Behaviour
- When `project_id` or `scenario_id` is supplied and `options[persist]` evaluates true (default for HTML form), snapshots are stored via `ProjectOpexSnapshot` and `ScenarioOpexSnapshot` repositories before rendering the response.
- Snapshot payloads capture normalized component entries, parameters, escalation settings, calculated totals, and optional notes, enabling historical comparison and downstream profitability inputs.
- JSON clients can disable persistence by sending `"options": {"persist": false}` or omit identifiers for ad hoc calculations.
## API Reference
| Route | Method | Description |
| ------------------------------------------------- | ------ | --------------------------------------------------------------------------------------------------------------------------------------------- |
| `/calculations/opex` (`calculations.opex_form`) | GET | Renders the planner template with defaults and any latest snapshot context for the supplied project/scenario IDs. |
| `/calculations/opex` (`calculations.opex_submit`) | POST | Accepts form or JSON payload matching `OpexCalculationRequest`, returns HTML or JSON results, and persists snapshots when context is present. |
Key schemas and services:
- `schemas.calculations.OpexComponentInput`, `OpexParameters`, `OpexCalculationRequest`, `OpexCalculationResult`
- `services.calculations.calculate_opex`
- `_prepare_opex_context`, `_persist_opex_snapshots`, and related helpers in `routes/calculations.py`
### Example JSON Request
```json
{
"components": [
{
"category": "energy",
"name": "Processing Plant Power",
"unit_cost": "480.00",
"quantity": "1.0",
"frequency": "monthly",
"currency": "USD",
"period_start": 1,
"period_end": 12
}
],
"parameters": {
"currency_code": "USD",
"escalation_pct": "3.0",
"discount_rate_pct": "8.0",
"evaluation_horizon_years": 10,
"apply_escalation": true
},
"options": {
"persist": true,
"snapshot_notes": "Baseline processing OPEX"
}
}
```
The response payload mirrors `OpexCalculationResult`, returning normalized components, aggregated totals, timeline series, and snapshot metadata when persistence is enabled.
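A minimal client sketch for submitting the request above follows; the base URL and the query-parameter placement of the project/scenario context are assumptions, while the `errors`/`message` keys on validation failures match the troubleshooting notes below.

```python
# Minimal client sketch; base URL and the query-string context are assumptions.
import json
import requests

with open("opex_request.json") as fh:  # the example payload above
    payload = json.load(fh)

response = requests.post(
    "https://calminer.example.com/calculations/opex",  # placeholder host
    params={"project_id": 1, "scenario_id": 3},
    json=payload,
    headers={"Accept": "application/json"},
    timeout=30,
)

if response.status_code == 422:
    # Validation failures expose `errors` and `message` keys (see Troubleshooting).
    details = response.json()
    print(details["message"], details["errors"])
else:
    response.raise_for_status()
    print(response.json())  # mirrors OpexCalculationResult
```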
## Troubleshooting
- **Validation errors**: The planner surfaces field-level issues above the component table. JSON responses include `errors` and `message` keys mirroring Pydantic validation output.
- **Currency mismatch**: All component rows must share the same currency. Adjust row currencies or the default currency in the parameters panel to resolve mismatches enforced by the service layer.
- **Timeline coverage**: Ensure `period_start` and `period_end` fall within the evaluation horizon. Rows outside the horizon are ignored in the timeline, though they still influence totals.
## Related Resources
- [Opex planner template](../../calminer/templates/scenarios/opex.html)
- [Calculations route handlers](../../calminer/routes/calculations.py)
- [Opex schemas and results](../../calminer/schemas/calculations.py)
- [Service implementation](../../calminer/services/calculations.py) and [service unit tests](../../calminer/tests/services/test_calculations_opex.py)
- [Integration coverage](../../calminer/tests/integration/test_opex_calculations.py)