Add comprehensive documentation for CalMiner, including architecture, development setup, MVP features, implementation plan, and testing strategy

2025-10-20 16:25:39 +02:00
parent 218b0ba58d
commit 328910a985
6 changed files with 236 additions and 0 deletions


@@ -1,3 +1,71 @@
# CalMiner
A web application to plan mining projects and estimate costs, returns, and profitability.
## Tech Stack
- **Backend**: Python 3.10+ (FastAPI)
- **Database**: PostgreSQL
- **Frontend**: (TBD)
- **Testing**: pytest
## Planned Features
- **Scenario Management**: The database supports different scenarios for what-if analysis, with parent-child relationships between scenarios.
- **Monte Carlo Simulation**: The system can perform Monte Carlo simulations for risk analysis and probabilistic forecasting.
- **Stochastic Variables**: The model handles uncertainty by defining variables with probability distributions.
- **Cost Tracking**: It tracks capital (`capex`) and operational (`opex`) expenditures.
- **Consumption Tracking**: It monitors the consumption of resources like chemicals, fuel, water, and scrap materials.
- **Production Output**: The database stores production results, including tons produced, recovery rates, and revenue.
- **Process Parameters**: It allows for defining and storing various parameters for different processes and scenarios.
- **Equipment Management**: The system manages equipment and their operational data.
- **Maintenance Logging**: It includes a log for equipment maintenance events.
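The Monte Carlo and stochastic-variable features above can be sketched roughly as follows. This is a minimal illustration, not the project's actual engine: the distributions, parameter values, and the `simulate_npv` name are all assumptions chosen for the example.

```python
# Illustrative Monte Carlo sketch: sample stochastic inputs, collect NPV-like outcomes.
import random
import statistics

def simulate_npv(n_iterations=10_000, seed=42):
    """Return a list of outcome samples from randomly drawn inputs.
    All distribution parameters here are illustrative placeholders."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_iterations):
        ore_grade = rng.normalvariate(1.2, 0.15)    # % metal, assumed normal
        metal_price = rng.lognormvariate(8.0, 0.2)  # $/t, assumed lognormal
        recovery_rate = rng.uniform(0.85, 0.95)     # assumed uniform
        tons_produced = 50_000
        revenue = tons_produced * (ore_grade / 100) * recovery_rate * metal_price
        capex, opex = 5_000_000, 1_500_000          # placeholder expenditures
        samples.append(revenue - capex - opex)
    return samples

npvs = simulate_npv()
print(f"mean outcome: {statistics.mean(npvs):,.0f}")
```

Fixing the seed makes runs reproducible, which matters when comparing scenarios against each other.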
## Database Objects
The database is composed of several tables that store different types of information. All tables are under schema `bricsium_platform`. See `structure.sql` for full DDL. Here are some of the most important ones:
- **CAPEX** — `bricsium_platform.capex`: Stores data on capital expenditures.
- **OPEX** — `bricsium_platform.opex`: Contains information on operational expenditures.
- **Chemical consumption** — `bricsium_platform.chemical_consumption`: Tracks the consumption of chemical reagents.
- **Fuel consumption** — `bricsium_platform.fuel_consumption`: Records the amount of fuel consumed.
- **Water consumption** — `bricsium_platform.water_consumption`: Monitors the use of water.
- **Scrap consumption** — `bricsium_platform.scrap_consumption`: Tracks the consumption of scrap materials.
- **Production output** — `bricsium_platform.production_output`: Stores data on production output, such as tons produced and recovery rates.
- **Equipment operation** — `bricsium_platform.equipment_operation`: Contains operational data for each piece of equipment.
- **Ore batch** — `bricsium_platform.ore_batch`: Stores information on ore batches, including their grade and other characteristics.
- **Exchange rate** — `bricsium_platform.exchange_rate`: Contains currency exchange rates.
- **Simulation result** — `bricsium_platform.simulation_result`: Stores the results of the Monte Carlo simulations.
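As an example of reading these tables from the backend, a query against `bricsium_platform.simulation_result` might be built as below. The `scenario_id` filter column is an assumption (see `structure.sql` for the actual columns); the commented psycopg2 usage shows how the parameterized query would be executed.

```python
# Sketch: build a parameterized aggregate query over simulation results.
def summary_query(scenario_id: int):
    """Return (sql, params) for average NPV/IRR/EBITDA/net revenue.
    The scenario_id column is an assumption for illustration."""
    sql = """
        SELECT AVG(npv), AVG(irr), AVG(ebitda), AVG(net_revenue)
        FROM bricsium_platform.simulation_result
        WHERE scenario_id = %s
    """
    return sql, (scenario_id,)

# With psycopg2 (connection string is a placeholder):
# import psycopg2
# with psycopg2.connect("dbname=calminer") as conn, conn.cursor() as cur:
#     cur.execute(*summary_query(1))
#     print(cur.fetchone())
```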
## Static Parameters
These are values that are not expected to change frequently and are used for configuration purposes. Some examples include:
- **Currencies**: `currency_code`, `currency_name`.
- **Distribution types**: `distribution_name`.
- **Units**: `unit_name`, `unit_symbol`, `unit_system`, `conversion_to_base`.
- **Parameter categories**: `category_name`.
- **Material types**: `type_name`, `category`.
- **Chemical reagents**: `reagent_name`, `chemical_formula`.
- **Fuel**: `fuel_name`.
- **Water**: `water_type`.
- **Scrap material**: `scrap_name`.
## Variables
These are dynamic data points that are recorded over time and used in calculations and simulations. Some examples include:
- **CAPEX**: `amount`.
- **OPEX**: `amount`.
- **Chemical consumption**: `quantity`, `efficiency`, `waste_factor`.
- **Fuel consumption**: `quantity`.
- **Water consumption**: `quantity`.
- **Scrap consumption**: `quantity`.
- **Production output**: `tons_produced`, `recovery_rate`, `metal_content`, `metallurgical_loss`, `net_revenue`.
- **Equipment operation**: `hours_operated`, `downtime_hours`.
- **Ore batch**: `ore_grade`, `moisture`, `sulfur`, `chlorine`.
- **Exchange rate**: `rate`.
- **Parameter values**: `value`.
- **Simulation result**: NPV (`npv`), IRR (`irr`), EBITDA (`ebitda`), `net_revenue`.
- **Cementation parameters**: `temperature`, pH (`ph`), `reaction_time`, `copper_concentration`, `iron_surface_area`.
- **Precipitate product**: `density`, `melting_point`, `boiling_point`.
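The NPV stored in `simulation_result` is, by definition, a discounted sum of cash flows. A minimal sketch of that calculation (the simulation engine itself may compute it differently):

```python
def npv(rate, cash_flows):
    """Discount a series of yearly cash flows; year 0 comes first and is undiscounted."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Example: -1000 upfront, then 400/year for 3 years at a 10% discount rate
print(round(npv(0.10, [-1000, 400, 400, 400]), 2))  # prints -5.26
```

IRR is the rate at which this sum equals zero, which is why it is typically found numerically rather than in closed form.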

docs/architecture.md Normal file

@@ -0,0 +1,32 @@
# Architecture Documentation
## Overview
CalMiner is a web application for planning mining projects, estimating costs, returns, and profitability. It uses Monte Carlo simulations for risk analysis and supports multiple scenarios.
## System Components
- **Frontend**: Web interface for user interaction (to be defined).
- **Backend**: Python API server (e.g., FastAPI) handling business logic.
- **Database**: PostgreSQL with schema `bricsium_platform` (see `structure.sql`).
- **Simulation Engine**: Python-based Monte Carlo runs and stochastic calculations.
## Data Flow
1. User inputs scenario parameters via frontend.
2. Backend validates and stores in database.
3. Simulation engine runs Monte Carlo iterations using stochastic variables.
4. Results stored in `simulation_result` table.
5. Frontend displays outputs like NPV, IRR, EBITDA.
## Database Architecture
- Schema: `bricsium_platform`
- Key tables: Scenarios, parameters, consumptions, outputs, simulations.
- Relationships: Foreign keys link scenarios to parameters, consumptions, and results.
## Next Steps
- Define API endpoints.
- Implement simulation logic.
- Add authentication and user management.

docs/development_setup.md Normal file

@@ -0,0 +1,46 @@
# Development Setup Guide
## Prerequisites
- Python (version 3.10+)
- PostgreSQL (version 13+)
- Git
## Database Setup
1. Install PostgreSQL and create a database named `calminer`.
2. Create schema `bricsium_platform`:
```sql
CREATE SCHEMA bricsium_platform;
```
3. Load the schema from `structure.sql`:
```bash
psql -d calminer -f structure.sql
```
## Backend Setup
1. Clone the repo.
2. Create a virtual environment: `python -m venv .venv`
3. Activate it: `.venv\Scripts\activate` (Windows) or `source .venv/bin/activate` (Linux/Mac)
4. Install dependencies: `pip install -r requirements.txt`
5. Set up environment variables (e.g., the database connection string in a `.env` file).
6. Run database migrations, if any.
7. Start server: `python main.py` or `uvicorn main:app --reload`
## Frontend Setup
(TBD; to be documented once the frontend is implemented)
## Running Locally
- Backend: `uvicorn main:app --reload`
- Frontend: (TBD)
## Testing
- Run tests: `pytest`


@@ -0,0 +1,43 @@
# Implementation Plan
## Feature: Scenario Creation and Management
### Scenario Implementation Steps
1. Create `models/scenario.py` for DB interactions.
2. Implement API endpoints in `routes/scenarios.py`: GET, POST, PUT, DELETE.
3. Add frontend component `components/ScenarioForm.vue` for CRUD.
4. Update `README.md` with API docs.
## Feature: Parameter Input and Validation
### Parameter Implementation Steps
1. Define parameter schemas in `models/parameters.py`.
2. Create validation middleware in `middleware/validation.py`.
3. Build input form in `components/ParameterInput.vue`.
4. Integrate with scenario management.
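A parameter schema in `models/parameters.py` could be sketched with Pydantic (FastAPI's usual validation layer); the field names and bounds below are illustrative assumptions:

```python
# Illustrative parameter schema; field names and constraints are assumptions.
from pydantic import BaseModel, Field, ValidationError

class ProcessParameter(BaseModel):
    name: str = Field(min_length=1)
    value: float
    unit_symbol: str = "t"
    # Recovery-style rates must stay within [0, 1]
    recovery_rate: float = Field(default=0.9, ge=0.0, le=1.0)

try:
    ProcessParameter(name="ore_grade", value=1.2, recovery_rate=1.5)
except ValidationError as exc:
    print(exc.errors()[0]["loc"])  # which field failed
```

Declaring bounds on the model keeps validation next to the schema, so the middleware in `middleware/validation.py` only needs to translate `ValidationError` into an HTTP response.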
## Feature: Monte Carlo Simulation Run
### Simulation Implementation Steps
1. Implement simulation logic in `services/simulation.py`.
2. Add endpoint `POST /api/simulations/run`.
3. Store results in `models/simulation_result.py`.
4. Add progress tracking UI.
## Feature: Basic Reporting
### Reporting Implementation Steps
1. Create report service `services/reporting.py`.
2. Build dashboard component `components/Dashboard.vue`.
3. Fetch data from simulation results.
4. Add charts using Chart.js.
## Next Steps
- Assign issues in GitHub.
- Estimate effort for each step.
- Start with backend models.

docs/mvp.md Normal file

@@ -0,0 +1,18 @@
# MVP Features
## Prioritized Features
1. **Scenario Creation and Management** (High Priority): Allow users to create, edit, and delete scenarios. Rationale: Core functionality for what-if analysis.
2. **Parameter Input and Validation** (High Priority): Input process parameters with validation. Rationale: Ensures data integrity for simulations.
3. **Monte Carlo Simulation Run** (High Priority): Execute simulations and store results. Rationale: Key differentiator for risk analysis.
4. **Basic Reporting** (Medium Priority): Display NPV, IRR, EBITDA from simulation results. Rationale: Essential for decision-making.
5. **Cost Tracking Dashboard** (Medium Priority): Visualize CAPEX and OPEX. Rationale: Helps monitor expenses.
6. **Consumption Monitoring** (Low Priority): Track resource consumption. Rationale: Useful for optimization.
7. **User Authentication** (Medium Priority): Basic login/logout. Rationale: Security for multi-user access.
8. **Export Results** (Low Priority): Export simulation data to CSV/PDF. Rationale: For external analysis.
## Rationale for Prioritization
- High: Core simulation and scenario features first.
- Medium: Reporting and auth for usability.
- Low: Nice-to-haves after basics.

docs/testing.md Normal file

@@ -0,0 +1,29 @@
# Testing Strategy
## Overview
CalMiner will use a combination of unit, integration, and end-to-end tests to ensure quality.
## Frameworks
- **Backend**: pytest for unit and integration tests.
- **Frontend**: (TBD) likely Playwright or Selenium, driven from pytest.
- **Database**: pytest fixtures with psycopg2 for DB tests.
## Test Types
- **Unit Tests**: Test individual functions/modules.
- **Integration Tests**: Test API endpoints and DB interactions.
- **E2E Tests**: (Future) Playwright for full user flows.
## CI/CD
- Use GitHub Actions for CI.
- Run tests on pull requests.
- Code coverage target: 80% (using pytest-cov).
## Running Tests
- Unit: `pytest tests/unit/`
- Integration: `pytest tests/integration/`
- All: `pytest`
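For example, a unit test under `tests/unit/` might look like this; the function under test is a hypothetical stand-in, since the real modules do not exist yet:

```python
# Illustrative tests/unit/test_recovery.py; the function under test is a stand-in.
def metal_recovered(tons_produced: float, ore_grade: float, recovery_rate: float) -> float:
    """Tons of metal recovered from a batch (grade given as a fraction)."""
    return tons_produced * ore_grade * recovery_rate

def test_full_recovery():
    assert metal_recovered(1000, 0.5, 1.0) == 500

def test_partial_recovery_reduces_output():
    assert metal_recovered(1000, 0.02, 0.9) < metal_recovered(1000, 0.02, 1.0)
```

pytest discovers any `test_*` function in `test_*.py` files automatically, so no registration is needed beyond the directory layout above.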