Add comprehensive documentation for CalMiner, including architecture, development setup, MVP features, implementation plan, and testing strategy

2025-10-20 16:25:39 +02:00
parent 218b0ba58d
commit 328910a985
6 changed files with 236 additions and 0 deletions

docs/architecture.md Normal file

@@ -0,0 +1,32 @@
# Architecture Documentation
## Overview
CalMiner is a web application for planning mining projects and estimating their costs, returns, and profitability. It uses Monte Carlo simulations for risk analysis and supports multiple scenarios.
## System Components
- **Frontend**: Web interface for user interaction (to be defined).
- **Backend**: Python API server (e.g., FastAPI) handling business logic.
- **Database**: PostgreSQL with schema `bricsium_platform` (see `structure.sql`).
- **Simulation Engine**: Python-based component that performs Monte Carlo runs and stochastic calculations.
## Data Flow
1. User inputs scenario parameters via the frontend.
2. Backend validates the input and stores it in the database.
3. Simulation engine runs Monte Carlo iterations using stochastic variables.
4. Results are stored in the `simulation_result` table.
5. Frontend displays outputs such as NPV, IRR, and EBITDA.
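The sketch below illustrates this flow end to end with FastAPI. The endpoint path, the `ScenarioParams` fields, and the stubbed result are assumptions for illustration only, not the final API.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ScenarioParams(BaseModel):  # hypothetical input shape
    discount_rate: float
    capex: float
    opex: float

@app.post("/api/scenarios/{scenario_id}/simulate")
def run_simulation(scenario_id: int, params: ScenarioParams) -> dict:
    # Steps 2-3: the validated input would be stored, then handed to the
    # simulation engine; a placeholder result stands in for that call here.
    result = {"scenario_id": scenario_id, "npv": None, "irr": None, "ebitda": None}
    # Step 4: the real implementation would persist `result` to simulation_result.
    return result
```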
## Database Architecture
- Schema: `bricsium_platform`
- Key tables: scenarios, parameters, consumptions, outputs, and simulations.
- Relationships: Foreign keys link scenarios to parameters, consumptions, and results.
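As a rough illustration of how these relationships could map to ORM models (assuming SQLAlchemy, which the project has not yet committed to), see the sketch below; table and column names are placeholders, and the authoritative definitions live in `structure.sql`.

```python
from sqlalchemy import Column, ForeignKey, Integer, Numeric, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()
SCHEMA = "bricsium_platform"

class Scenario(Base):
    __tablename__ = "scenario"          # illustrative name
    __table_args__ = {"schema": SCHEMA}
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)

class SimulationResult(Base):
    __tablename__ = "simulation_result"
    __table_args__ = {"schema": SCHEMA}
    id = Column(Integer, primary_key=True)
    # Foreign key linking each result back to its scenario.
    scenario_id = Column(Integer, ForeignKey(f"{SCHEMA}.scenario.id"), nullable=False)
    npv = Column(Numeric)
    irr = Column(Numeric)
```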
## Next Steps
- Define API endpoints.
- Implement simulation logic.
- Add authentication and user management.

docs/development_setup.md Normal file

@@ -0,0 +1,46 @@
# Development Setup Guide
## Prerequisites
- Python (version 3.10+)
- PostgreSQL (version 13+)
- Git
## Database Setup
1. Install PostgreSQL and create a database named `calminer`.
2. Create schema `bricsium_platform`:
```sql
CREATE SCHEMA bricsium_platform;
```
3. Load the schema from `structure.sql`:
```bash
psql -d calminer -f structure.sql
```
## Backend Setup
1. Clone the repo.
2. Create a virtual environment: `python -m venv .venv`
3. Activate it: `.venv\Scripts\activate` (Windows) or `source .venv/bin/activate` (Linux/Mac)
4. Install dependencies: `pip install -r requirements.txt`
5. Set up environment variables (e.g., the DB connection string in a `.env` file); see the sketch after this list.
6. Run database migrations, if any exist.
7. Start server: `python main.py` or `uvicorn main:app --reload`
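For step 5, a minimal configuration sketch is shown below. It assumes a `DATABASE_URL` entry in `.env` and the `python-dotenv` package, neither of which is confirmed by `requirements.txt`.

```python
# config.py - minimal sketch; variable name and default are illustrative.
import os

from dotenv import load_dotenv

load_dotenv()  # read key=value pairs from .env into the process environment

DATABASE_URL = os.getenv(
    "DATABASE_URL",
    "postgresql://postgres:postgres@localhost:5432/calminer",
)
```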
## Frontend Setup
(TBD - add when implemented)
## Running Locally
- Backend: `uvicorn main:app --reload`
- Frontend: (TBD)
## Testing
- Run tests: `pytest`


@@ -0,0 +1,43 @@
# Implementation Plan
## Feature: Scenario Creation and Management
### Scenario Implementation Steps
1. Create `models/scenario.py` for DB interactions.
2. Implement API endpoints in `routes/scenarios.py`: GET, POST, PUT, DELETE.
3. Add frontend component `components/ScenarioForm.vue` for CRUD.
4. Update `README.md` with API docs.
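A minimal sketch of what the router in `routes/scenarios.py` could look like, with an in-memory store standing in for the database layer; all names beyond the file path are illustrative.

```python
from fastapi import APIRouter, HTTPException
from pydantic import BaseModel

router = APIRouter(prefix="/api/scenarios", tags=["scenarios"])

class Scenario(BaseModel):
    id: int | None = None
    name: str
    description: str = ""

_scenarios: dict[int, Scenario] = {}  # stand-in for the scenario table

@router.post("/", status_code=201)
def create_scenario(scenario: Scenario) -> Scenario:
    scenario.id = len(_scenarios) + 1
    _scenarios[scenario.id] = scenario
    return scenario

@router.get("/{scenario_id}")
def get_scenario(scenario_id: int) -> Scenario:
    if scenario_id not in _scenarios:
        raise HTTPException(status_code=404, detail="Scenario not found")
    return _scenarios[scenario_id]
```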
## Feature: Parameter Input and Validation
### Parameter Implementation Steps
1. Define parameter schemas in `models/parameters.py`.
2. Create validation middleware in `middleware/validation.py`.
3. Build input form in `components/ParameterInput.vue`.
4. Integrate with scenario management.
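A possible shape for the parameter schema, assuming Pydantic (v2) for validation; the field names, bounds, and the example business rule are illustrative and not taken from `structure.sql`.

```python
from pydantic import BaseModel, Field, field_validator

class ProcessParameters(BaseModel):
    ore_tonnage: float = Field(gt=0, description="Tonnes of ore processed per year")
    recovery_rate: float = Field(gt=0, le=1, description="Metallurgical recovery as a fraction")
    discount_rate: float = Field(ge=0, lt=1, description="Discount rate used for NPV")

    @field_validator("discount_rate")
    @classmethod
    def reject_implausible_rate(cls, value: float) -> float:
        # Example of a business rule that goes beyond simple bounds checks.
        if value > 0.5:
            raise ValueError("discount_rate above 50% is almost certainly a data entry error")
        return value
```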
## Feature: Monte Carlo Simulation Run
### Simulation Implementation Steps
1. Implement simulation logic in `services/simulation.py`.
2. Add endpoint `POST /api/simulations/run`.
3. Store results in `models/simulation_result.py`.
4. Add progress tracking UI.
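A minimal sketch of the Monte Carlo core, assuming NumPy; the normal revenue distribution and the single cash-flow formula are placeholders for whatever stochastic variables the real engine draws from the scenario.

```python
import numpy as np

def run_monte_carlo(
    capex: float,
    annual_revenue_mean: float,
    annual_revenue_std: float,
    annual_opex: float,
    discount_rate: float,
    years: int = 10,
    iterations: int = 10_000,
    seed: int | None = None,
) -> dict:
    rng = np.random.default_rng(seed)
    # Draw one revenue path per iteration: shape (iterations, years).
    revenue = rng.normal(annual_revenue_mean, annual_revenue_std, size=(iterations, years))
    cash_flows = revenue - annual_opex
    # Discount each year's cash flow and subtract the upfront investment.
    discount = (1 + discount_rate) ** np.arange(1, years + 1)
    npv = cash_flows @ (1 / discount) - capex
    return {
        "npv_mean": float(npv.mean()),
        "npv_p10": float(np.percentile(npv, 10)),
        "npv_p90": float(np.percentile(npv, 90)),
    }
```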
## Feature: Basic Reporting
### Reporting Implementation Steps
1. Create report service `services/reporting.py`.
2. Build dashboard component `components/Dashboard.vue`.
3. Fetch data from simulation results.
4. Add charts using Chart.js.
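A rough sketch of the aggregation step behind the dashboard; it assumes each `simulation_result` row exposes `npv`, `irr`, and `ebitda` values, which is not yet fixed by the schema shown here.

```python
from statistics import mean

def summarize_results(rows: list[dict]) -> dict:
    """Collapse per-iteration simulation rows into dashboard-ready figures."""
    if not rows:
        return {}
    return {
        "npv_mean": mean(r["npv"] for r in rows),
        "irr_mean": mean(r["irr"] for r in rows),
        "ebitda_mean": mean(r["ebitda"] for r in rows),
        "runs": len(rows),
    }
```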
## Next Steps
- Assign issues in GitHub.
- Estimate effort for each step.
- Start with backend models.

docs/mvp.md Normal file

@@ -0,0 +1,18 @@
# MVP Features
## Prioritized Features
1. **Scenario Creation and Management** (High Priority): Allow users to create, edit, and delete scenarios. Rationale: Core functionality for what-if analysis.
2. **Parameter Input and Validation** (High Priority): Input process parameters with validation. Rationale: Ensures data integrity for simulations.
3. **Monte Carlo Simulation Run** (High Priority): Execute simulations and store results. Rationale: Key differentiator for risk analysis.
4. **Basic Reporting** (Medium Priority): Display NPV, IRR, EBITDA from simulation results. Rationale: Essential for decision-making.
5. **Cost Tracking Dashboard** (Medium Priority): Visualize CAPEX and OPEX. Rationale: Helps monitor expenses.
6. **Consumption Monitoring** (Low Priority): Track resource consumption. Rationale: Useful for optimization.
7. **User Authentication** (Medium Priority): Basic login/logout. Rationale: Security for multi-user access.
8. **Export Results** (Low Priority): Export simulation data to CSV/PDF. Rationale: For external analysis.
## Rationale for Prioritization
- High: Core simulation and scenario features first.
- Medium: Reporting and auth for usability.
- Low: Nice-to-have features once the basics are in place.

docs/testing.md Normal file

@@ -0,0 +1,29 @@
# Testing Strategy
## Overview
CalMiner will use a combination of unit, integration, and end-to-end tests to ensure quality.
## Frameworks
- **Backend**: pytest for unit and integration tests.
- **Frontend**: (TBD) likely Playwright or Selenium, driven from pytest.
- **Database**: pytest fixtures with psycopg2 for DB tests.
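A sketch of what a psycopg2-backed fixture could look like; the connection details are illustrative and would come from the test environment.

```python
# tests/conftest.py - illustrative fixture for DB-level tests.
import psycopg2
import pytest

@pytest.fixture
def db_connection():
    conn = psycopg2.connect(
        dbname="calminer_test", user="postgres", password="postgres", host="localhost"
    )
    try:
        yield conn
    finally:
        conn.rollback()  # discard anything a test wrote
        conn.close()

def test_simulation_result_table_exists(db_connection):
    with db_connection.cursor() as cur:
        cur.execute("SELECT to_regclass('bricsium_platform.simulation_result')")
        assert cur.fetchone()[0] is not None
```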
## Test Types
- **Unit Tests**: Test individual functions/modules.
- **Integration Tests**: Test API endpoints and DB interactions.
- **E2E Tests**: (Future) Playwright for full user flows.
## CI/CD
- Use GitHub Actions for CI.
- Run tests on pull requests.
- Code coverage target: 80% (using pytest-cov).
## Running Tests
- Unit: `pytest tests/unit/`
- Integration: `pytest tests/integration/`
- All: `pytest`