Add initial implementation of CalMiner with project structure, environment setup, and core features
- Create `.env.example` for environment variables
- Update README with project structure and development setup instructions
- Implement FastAPI application with API routes for scenarios and parameters
- Add database models for scenarios, parameters, and simulation results
- Introduce validation middleware for JSON requests
- Create services for running simulations and generating reports
- Add testing strategy and directory structure in documentation
@@ -8,8 +8,10 @@ CalMiner is a web application for planning mining projects, estimating costs, re

- **Frontend**: Web interface for user interaction (to be defined).
- **Backend**: Python API server (e.g., FastAPI) handling business logic.
- **Database**: PostgreSQL with schema `bricsium_platform` (see `structure.sql`).
- **Database**: PostgreSQL.
- **Configuration**: Environment variables and settings loaded via `python-dotenv` and stored in the `config/` directory.
- **Simulation Engine**: Python-based Monte Carlo runs and stochastic calculations.
- **API Routes**: FastAPI routers defined in `routes/` for scenarios, simulations, consumptions, and reporting endpoints (see the sketch below).
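To make the wiring between these components concrete, here is a minimal sketch of a FastAPI entrypoint that loads the environment via `python-dotenv` and includes the feature routers. The router module names follow the `routes/` layout described here, but the exact contents of `main.py` and the URL prefixes are assumptions.

```python
# main.py - minimal sketch; router modules follow the routes/ layout described above,
# but the URL prefixes and overall structure are assumptions.
from dotenv import load_dotenv
from fastapi import FastAPI

from routes import reporting, scenarios, simulations

load_dotenv()  # pull DATABASE_URL and other settings from .env

app = FastAPI(title="CalMiner")

# Each feature area contributes its own APIRouter.
app.include_router(scenarios.router, prefix="/api/scenarios", tags=["scenarios"])
app.include_router(simulations.router, prefix="/api/simulations", tags=["simulations"])
app.include_router(reporting.router, prefix="/api/reporting", tags=["reporting"])
```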
## Data Flow
@@ -22,8 +24,15 @@ CalMiner is a web application for planning mining projects, estimating costs, re
## Database Architecture

- Schema: `bricsium_platform`
- Key tables: Scenarios, parameters, consumptions, outputs, simulations.
- Relationships: Foreign keys link scenarios to parameters, consumptions, and results.
- Key tables include:
  - `scenario` (scenario metadata and parameters)
  - `capex`, `opex` (capital and operational expenditures)
  - `chemical_consumption`, `fuel_consumption`, `water_consumption`, `scrap_consumption`
  - `production_output`, `equipment_operation`, `ore_batch`
  - `exchange_rate`, `simulation_result`
- Relationships: Foreign keys link scenarios to parameters, consumptions, and simulation results (see the model sketch below).
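As an illustration of how these tables and their foreign-key relationships could be declared in the Python layer, the snippet below sketches SQLAlchemy models for `scenario` and `simulation_result`; the column names beyond the keys are assumptions, not the actual schema.

```python
# Illustrative SQLAlchemy models for two of the tables above (column details are assumptions).
from sqlalchemy import Column, Float, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Scenario(Base):
    __tablename__ = "scenario"

    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)  # scenario metadata
    # One scenario owns many simulation results (the FK relationship described above).
    results = relationship("SimulationResult", back_populates="scenario")

class SimulationResult(Base):
    __tablename__ = "simulation_result"

    id = Column(Integer, primary_key=True)
    scenario_id = Column(Integer, ForeignKey("scenario.id"), nullable=False)
    npv = Column(Float)  # example output metric
    scenario = relationship("Scenario", back_populates="results")
```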
## Next Steps
@@ -6,41 +6,66 @@

- PostgreSQL (version 13+)
- Git

## Clone and Project Setup

```powershell
# Clone the repository
git clone https://git.allucanget.biz/allucanget/calminer.git
cd calminer
```

## Virtual Environment

```powershell
# Create and activate a virtual environment
python -m venv .venv
.\.venv\Scripts\Activate.ps1
```

## Install Dependencies

```powershell
pip install -r requirements.txt
```

## Database Setup

1. Install PostgreSQL and create a database named `calminer`.

2. Create schema `bricsium_platform`:

```sql
CREATE SCHEMA bricsium_platform;
```

3. Load the schema from `structure.sql`:

```bash
psql -d calminer -f structure.sql
```

1. Create database user:

```sql
CREATE USER calminer_user WITH PASSWORD 'your_password';
```

2. Create database:

```sql
CREATE DATABASE calminer;
```

## Backend Setup

1. Clone the repo.
2. Create a virtual environment: `python -m venv .venv`
3. Activate it: `.venv\Scripts\activate` (Windows) or `source .venv/bin/activate` (Linux/Mac)
4. Install dependencies: `pip install -r requirements.txt`
5. Set up environment variables (e.g., DB connection string in `.env`).
6. Run migrations if any.
7. Start server: `python main.py` or `uvicorn main:app --reload`

## Environment Variables

1. Copy `.env.example` to `.env` at project root.
2. Edit `.env` to set the database connection string:

```dotenv
DATABASE_URL=postgresql://<user>:<password>@localhost:5432/calminer
```

3. The application uses `python-dotenv` to load these variables (see the sketch below).

## Frontend Setup

(TBD - add when implemented)
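For illustration, a minimal sketch of how the application code might read `DATABASE_URL` with `python-dotenv`; the `config/settings.py` module name is an assumption based on the `config/` directory mentioned in the architecture notes.

```python
# config/settings.py - sketch of loading environment variables (module name is an assumption)
import os

from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from .env into the process environment

DATABASE_URL = os.getenv("DATABASE_URL")
if not DATABASE_URL:
    raise RuntimeError("DATABASE_URL is not set; copy .env.example to .env first.")
```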
## Running Locally

- Backend: `uvicorn main:app --reload`
- Frontend: (TBD)

## Running the Application

```powershell
# Start the FastAPI server
uvicorn main:app --reload
```

## Testing

- Run tests: `pytest`

```powershell
pytest
```

## Frontend Setup

(TBD - add when frontend implemented)

@@ -1,43 +1,153 @@
# Implementation Plan

This document outlines the MVP features and implementation steps for CalMiner.

Refer to the following for context alignment:

- System architecture: [docs/architecture.md](architecture.md)
- Development setup: [docs/development_setup.md](development_setup.md)

## Feature: Scenario Creation and Management

### Scenario Implementation Steps

1. Create `models/scenario.py` for DB interactions.
2. Implement API endpoints in `routes/scenarios.py`: GET, POST, PUT, DELETE.
3. Add frontend component `components/ScenarioForm.html` for CRUD.
4. Update `README.md` with API docs.

## Feature: Parameter Input and Validation

### Parameter Implementation Steps

1. Define parameter schemas in `models/parameters.py`.
2. Create validation middleware in `middleware/validation.py`.
3. Build input form in `components/ParameterInput.html`.
4. Integrate with scenario management.

## Project Setup

1. Connect to the PostgreSQL database with schema `calminer`.
2. Create and activate a virtual environment and install dependencies via `requirements.txt`.
3. Define environment variables in `.env`, including `DATABASE_URL`.
4. Configure the FastAPI entrypoint in `main.py` to include the routers.

## Feature: Scenario Management

### Implementation Steps

1. Create `models/scenario.py` for scenario CRUD.
2. Implement API endpoints in `routes/scenarios.py` (GET, POST, PUT, DELETE); see the sketch below.
3. Write unit tests in `tests/unit/test_scenario.py`.
4. Build UI component `components/ScenarioForm.html`.
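A rough sketch of the CRUD endpoints listed above for `routes/scenarios.py`; the Pydantic model and the in-memory store are placeholders standing in for the real `models/scenario.py` layer.

```python
# routes/scenarios.py - sketch only; persistence is stubbed with a dict
# instead of the real models/scenario.py layer.
from typing import Dict

from fastapi import APIRouter, HTTPException
from pydantic import BaseModel

router = APIRouter()

class ScenarioIn(BaseModel):
    name: str
    description: str = ""

_scenarios: Dict[int, ScenarioIn] = {}  # placeholder in-memory store

@router.get("/")
def list_scenarios():
    return [{"id": sid, **s.dict()} for sid, s in _scenarios.items()]

@router.post("/", status_code=201)
def create_scenario(scenario: ScenarioIn):
    sid = len(_scenarios) + 1
    _scenarios[sid] = scenario
    return {"id": sid, **scenario.dict()}

@router.put("/{scenario_id}")
def update_scenario(scenario_id: int, scenario: ScenarioIn):
    if scenario_id not in _scenarios:
        raise HTTPException(status_code=404, detail="Scenario not found")
    _scenarios[scenario_id] = scenario
    return {"id": scenario_id, **scenario.dict()}

@router.delete("/{scenario_id}", status_code=204)
def delete_scenario(scenario_id: int):
    if _scenarios.pop(scenario_id, None) is None:
        raise HTTPException(status_code=404, detail="Scenario not found")
```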

## Feature: Monte Carlo Simulation Run

### Simulation Implementation Steps

1. Implement simulation logic in `services/simulation.py`.
2. Add endpoint `POST /api/simulations/run`.
3. Store results in `models/simulation_result.py`.
4. Add progress tracking UI.

## Feature: Process Parameters

### Implementation Steps

1. Create `models/parameters.py` for process parameters.
2. Implement Pydantic schemas in `routes/parameters.py`.
3. Add validation middleware in `middleware/validation.py`.
4. Write unit tests in `tests/unit/test_parameter.py`.
5. Build UI component `components/ParameterInput.html`.

## Feature: Basic Reporting

### Reporting Implementation Steps

1. Create report service `services/reporting.py`.
2. Build dashboard component `components/Dashboard.html`.
3. Fetch data from simulation results.
4. Add charts using Chart.js.

## Feature: Stochastic Variables

### Implementation Steps

1. Create `models/distribution.py` for variable distributions (see the sketch below).
2. Implement API routes in `routes/distributions.py`.
3. Write Pydantic schemas and validations.
4. Write unit tests in `tests/unit/test_distribution.py`.
5. Build UI component `components/DistributionEditor.html`.
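A minimal sketch of what `models/distribution.py` might contain, assuming a simple normal/triangular parameterisation; the field names and the `sample` helper are illustrative assumptions.

```python
# models/distribution.py - sketch; distribution kinds and fields are assumptions
import random

from pydantic import BaseModel

class Distribution(BaseModel):
    """A stochastic variable attached to a scenario parameter."""

    kind: str = "normal"   # e.g. "normal" or "triangular"
    mean: float = 0.0
    std_dev: float = 1.0
    low: float = 0.0       # used by the triangular form
    mode: float = 0.0
    high: float = 1.0

    def sample(self) -> float:
        """Draw one value for a Monte Carlo iteration."""
        if self.kind == "triangular":
            return random.triangular(self.low, self.high, self.mode)
        return random.gauss(self.mean, self.std_dev)
```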

## Next Steps

- Assign issues in GitHub.
- Estimate effort for each step.
- Start with backend models.

## Feature: Cost Tracking

### Implementation Steps

1. Create `models/capex.py` and `models/opex.py`.
2. Implement API routes in `routes/costs.py`.
3. Write Pydantic schemas for CAPEX/OPEX (see the sketch below).
4. Write unit tests in `tests/unit/test_costs.py`.
5. Build UI component `components/CostForm.html`.
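To illustrate the schema step, one possible Pydantic model for a CAPEX entry; the field names are assumptions, and an OPEX schema would mirror it.

```python
# Possible Pydantic schema for a CAPEX entry (field names are assumptions).
from pydantic import BaseModel, Field

class CapexEntryIn(BaseModel):
    scenario_id: int
    description: str
    amount: float = Field(ge=0, description="Expenditure amount; must be non-negative")
    year: int  # year in which the expenditure is incurred
```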
## Feature: Consumption Tracking

### Implementation Steps

1. Create models for consumption: `chemical_consumption.py`, `fuel_consumption.py`, `water_consumption.py`, `scrap_consumption.py`.
2. Implement API routes in `routes/consumption.py`.
3. Write Pydantic schemas for consumption data.
4. Write unit tests in `tests/unit/test_consumption.py`.
5. Build UI component `components/ConsumptionDashboard.html`.

## Feature: Production Output

### Implementation Steps

1. Create `models/production_output.py`.
2. Implement API routes in `routes/production.py`.
3. Write Pydantic schemas for production output.
4. Write unit tests in `tests/unit/test_production.py`.
5. Build UI component `components/ProductionChart.html`.

## Feature: Equipment Management

### Implementation Steps

1. Create `models/equipment.py` for equipment data.
2. Implement API routes in `routes/equipment.py`.
3. Write Pydantic schemas for equipment.
4. Write unit tests in `tests/unit/test_equipment.py`.
5. Build UI component `components/EquipmentList.html`.

## Feature: Maintenance Logging

### Implementation Steps

1. Create `models/maintenance.py` for maintenance events.
2. Implement API routes in `routes/maintenance.py`.
3. Write Pydantic schemas for maintenance logs.
4. Write unit tests in `tests/unit/test_maintenance.py`.
5. Build UI component `components/MaintenanceLog.html`.

## Feature: Monte Carlo Simulation Engine

### Implementation Steps

1. Implement Monte Carlo logic in `services/simulation.py` (see the sketch below).
2. Persist results in `models/simulation_result.py`.
3. Expose endpoint in `routes/simulations.py`.
4. Write integration tests in `tests/unit/test_simulation.py`.
5. Build UI component `components/SimulationRunner.html`.
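A simplified sketch of the kind of loop `services/simulation.py` could run. The revenue/cost model, the function signature, and the result shape are assumptions; the real engine would draw its inputs from scenario parameters and distributions stored in the database.

```python
# services/simulation.py - simplified Monte Carlo loop; inputs and result shape are assumptions.
import random
import statistics
from typing import Dict, List, Optional

def run_simulation(revenue_mean: float, revenue_std: float,
                   cost_mean: float, cost_std: float,
                   iterations: int = 10_000,
                   seed: Optional[int] = None) -> Dict[str, float]:
    """Sample revenue and cost per iteration and summarise the resulting margin."""
    rng = random.Random(seed)  # explicit seed supports re-running with different seeds
    margins: List[float] = []
    for _ in range(iterations):
        revenue = rng.gauss(revenue_mean, revenue_std)
        cost = rng.gauss(cost_mean, cost_std)
        margins.append(revenue - cost)
    cuts = statistics.quantiles(margins, n=10)  # nine cut points -> deciles
    return {"mean": statistics.mean(margins), "p10": cuts[0], "p90": cuts[-1]}

# Example: run_simulation(1_000.0, 150.0, 700.0, 120.0, iterations=5_000, seed=42)
```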
## Feature: Reporting / Dashboard

### Implementation Steps

1. Implement report calculations in `services/reporting.py`.
2. Add detailed and summary endpoints in `routes/reporting.py`.
3. Write unit tests in `tests/unit/test_reporting.py`.
4. Enhance UI in `components/Dashboard.html` with charts.

## MVP Feature Analysis (summary)

Goal: Identify core MVP features, acceptance criteria, and quick estimates.

Features:

- Scenario Management
  - Acceptance: create/read/update/delete scenarios; persist to DB; API coverage with tests.
  - Estimate: 3-5 days (backend + minimal UI).

- Parameter Input & Validation
  - Acceptance: define parameter schemas, validate inputs, surface errors to API/UI.
  - Estimate: 2-3 days.

- Monte Carlo Simulation Engine
  - Acceptance: run parameterised simulations, store results, ability to rerun with different seeds, basic progress reporting.
  - Estimate: 1-2 weeks (core engine + persistence).

- Reporting / Dashboard
  - Acceptance: display simulation outputs (NPV, IRR distributions), basic charts, export CSV.
  - Estimate: 4-7 days.

Edge cases to consider:

- Large simulation runs (memory / timeouts) — use streaming, chunking, or background workers (see the sketch below).
- DB migration and schema versioning.
- Authentication/authorization for scenario access.
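One way to keep large runs off the request path is FastAPI's built-in `BackgroundTasks`, sketched below; a production deployment might prefer a dedicated worker queue instead, and the endpoint path and function names here are assumptions.

```python
# Sketch: offload a long simulation run with FastAPI's BackgroundTasks (names are assumptions).
from fastapi import APIRouter, BackgroundTasks

router = APIRouter()

def run_large_simulation(scenario_id: int, iterations: int) -> None:
    """Placeholder for the heavy Monte Carlo run; results would be persisted, not returned."""
    ...

@router.post("/api/simulations/run-async", status_code=202)
def run_async(scenario_id: int, iterations: int, background_tasks: BackgroundTasks):
    # Schedule the work to run after the response is sent, so the client is not blocked.
    background_tasks.add_task(run_large_simulation, scenario_id, iterations)
    return {"status": "scheduled", "scenario_id": scenario_id}
```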

Next actionable items:

1. Break Scenario Management into sub-issues (models, routes, tests, simple UI).
2. Scaffold Parameter Input & Validation (`models/parameters.py`, middleware, routes, tests).
3. Prototype the simulation engine with a small deterministic runner and unit tests.
4. Scaffold Monte Carlo Simulation endpoints (`services/simulation.py`, `routes/simulations.py`, tests).
5. Scaffold Reporting endpoints (`services/reporting.py`, `routes/reporting.py`, front-end Dashboard, tests).
6. Add CI job for tests and coverage.

@@ -27,3 +27,48 @@ CalMiner will use a combination of unit, integration, and end-to-end tests to en

- Unit: `pytest tests/unit/`
- Integration: `pytest tests/integration/`
- All: `pytest`

## Test Directory Structure

Organize tests under the `tests/` directory mirroring the application structure:

```bash
tests/
  unit/
    test_<module>.py
  integration/
    test_<endpoint>.py
  fixtures/
    conftest.py
```

## Writing Tests

- Name tests with the `test_` prefix.
- Group related tests in classes or modules.
- Use descriptive assertion messages.

## Fixtures and Test Data

- Define reusable fixtures in `tests/fixtures/conftest.py`.
- Use temporary in-memory databases or isolated schemas for DB tests (see the sketch below).
- Load sample data via fixtures for consistent test environments.
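For example, a `tests/fixtures/conftest.py` fixture along these lines could supply an isolated session; it uses an in-memory SQLite engine as a stand-in for PostgreSQL, which is an assumption about how DB tests will be isolated.

```python
# tests/fixtures/conftest.py - sketch; uses in-memory SQLite as a stand-in for PostgreSQL
import pytest
from sqlalchemy import create_engine
from sqlalchemy.orm import Session

@pytest.fixture()
def db_session():
    """Yield a throwaway session bound to an in-memory database."""
    engine = create_engine("sqlite:///:memory:")
    # Base.metadata.create_all(engine)  # create the application's tables here
    with Session(engine) as session:
        yield session
    engine.dispose()
```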
## Mocking and Dependency Injection

- Use `unittest.mock` to mock external dependencies.
- Inject dependencies via function parameters or FastAPI's dependency overrides in tests (see the sketch below).
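A small sketch of the dependency-override approach with FastAPI's `TestClient`; the `get_db` dependency and the scenarios endpoint path are assumptions used for illustration.

```python
# Sketch of overriding a dependency in tests (get_db and the endpoint path are assumptions).
from fastapi.testclient import TestClient

from main import app, get_db  # get_db is assumed to be the app's DB session dependency

def fake_db():
    """Stand-in dependency so the endpoint never touches a real database."""
    return {}

def test_list_scenarios_returns_ok():
    app.dependency_overrides[get_db] = fake_db
    client = TestClient(app)
    response = client.get("/api/scenarios/")
    assert response.status_code == 200, "listing scenarios should succeed with the fake DB"
    app.dependency_overrides.clear()
```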
## Code Coverage

- Install `pytest-cov` to generate coverage reports.
- Run with coverage: `pytest --cov=calminer --cov-report=html`.
- Ensure coverage meets the 80% threshold.

## CI Integration

- Configure GitHub Actions workflow in `.github/workflows/ci.yml` to:
  - Install dependencies
  - Run `pytest` with coverage
  - Fail on coverage <80%
  - Upload coverage artifact