calminer/docs/implementation_plan.md

Implementation Plan

This document outlines the MVP features and implementation steps for CalMiner.

Refer to the related project documentation for context alignment.

Project Setup

  1. Connect to PostgreSQL database with schema calminer.
  2. Create and activate a virtual environment and install dependencies via requirements.txt.
  3. Define environment variables in .env, including DATABASE_URL.
  4. Configure FastAPI entrypoint in main.py to include routers.

Feature: Scenario Management

Implementation Steps

  1. Create models/scenario.py for scenario CRUD.
  2. Implement API endpoints in routes/scenarios.py (GET, POST, PUT, DELETE).
  3. Write unit tests in tests/unit/test_scenario.py.
  4. Build UI component components/ScenarioForm.html.
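A sketch of what models/scenario.py might contain, assuming SQLAlchemy; the column set and the create_scenario helper are illustrative, not the final schema:

```python
from typing import Optional

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class Scenario(Base):
    """Scenario row; in production the table would live in the calminer
    schema (add __table_args__ = {"schema": "calminer"} on PostgreSQL)."""

    __tablename__ = "scenarios"

    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    description = Column(String, nullable=True)


def create_scenario(session: Session, name: str,
                    description: Optional[str] = None) -> Scenario:
    """Persist a new scenario and return it with its generated id."""
    scenario = Scenario(name=name, description=description)
    session.add(scenario)
    session.commit()
    session.refresh(scenario)
    return scenario
```

The route layer in routes/scenarios.py would then wrap helpers like this in the GET/POST/PUT/DELETE endpoints.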

Feature: Process Parameters

Implementation Steps

  1. Create models/parameters.py for process parameters.
  2. Implement Pydantic schemas in routes/parameters.py.
  3. Add validation middleware in middleware/validation.py.
  4. Write unit tests in tests/unit/test_parameter.py.
  5. Build UI component components/ParameterInput.html.
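The parameter schema and validation steps could look like this, assuming Pydantic v2; the field names and optional bounds are assumptions for illustration:

```python
from typing import Optional

from pydantic import BaseModel, Field, model_validator


class ProcessParameter(BaseModel):
    """One process parameter with optional bounds checking."""

    name: str = Field(min_length=1)
    value: float
    unit: str = Field(min_length=1)
    min_value: Optional[float] = None
    max_value: Optional[float] = None

    @model_validator(mode="after")
    def check_bounds(self) -> "ProcessParameter":
        # Reject values outside the declared bounds so errors surface
        # at the API boundary rather than inside the simulation.
        if self.min_value is not None and self.value < self.min_value:
            raise ValueError("value below min_value")
        if self.max_value is not None and self.value > self.max_value:
            raise ValueError("value above max_value")
        return self
```

Validation errors raised here are what middleware/validation.py would translate into API error responses.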

Feature: Stochastic Variables

Implementation Steps

  1. Create models/distribution.py for variable distributions.
  2. Implement API routes in routes/distributions.py.
  3. Write Pydantic schemas and validations.
  4. Write unit tests in tests/unit/test_distribution.py.
  5. Build UI component components/DistributionEditor.html.
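A stdlib-only sketch of how models/distribution.py might represent a stochastic variable; the kind/params shape is an assumption, not the persisted schema:

```python
import random
from dataclasses import dataclass


@dataclass
class Distribution:
    """A stochastic variable: a distribution kind plus its parameters."""

    kind: str     # "normal", "uniform", or "triangular"
    params: dict  # e.g. {"mean": ..., "stddev": ...} for "normal"

    def sample(self, rng: random.Random) -> float:
        """Draw one value using the supplied (seedable) RNG."""
        if self.kind == "normal":
            return rng.gauss(self.params["mean"], self.params["stddev"])
        if self.kind == "uniform":
            return rng.uniform(self.params["low"], self.params["high"])
        if self.kind == "triangular":
            return rng.triangular(self.params["low"], self.params["high"],
                                  self.params["mode"])
        raise ValueError(f"unsupported distribution: {self.kind}")
```

Passing the RNG in explicitly keeps sampling reproducible, which the simulation engine relies on for seeded reruns.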

Feature: Cost Tracking

Implementation Steps

  1. Create models/capex.py and models/opex.py.
  2. Implement API routes in routes/costs.py.
  3. Write Pydantic schemas for CAPEX/OPEX.
  4. Write unit tests in tests/unit/test_costs.py.
  5. Build UI component components/CostForm.html.
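One way the CAPEX/OPEX split could be modelled; the CostItem shape and aggregation helper are illustrative assumptions:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Iterable


class CostType(Enum):
    CAPEX = "capex"  # one-off capital expenditure
    OPEX = "opex"    # recurring cost per period


@dataclass
class CostItem:
    name: str
    cost_type: CostType
    amount: float


def total_by_type(items: Iterable[CostItem], cost_type: CostType) -> float:
    """Sum all cost items of one type, e.g. total CAPEX for a scenario."""
    return sum(item.amount for item in items if item.cost_type is cost_type)
```
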

Feature: Consumption Tracking

Implementation Steps

  1. Create models for consumption: chemical_consumption.py, fuel_consumption.py, water_consumption.py, scrap_consumption.py.
  2. Implement API routes in routes/consumption.py.
  3. Write Pydantic schemas for consumption data.
  4. Write unit tests in tests/unit/test_consumption.py.
  5. Build UI component components/ConsumptionDashboard.html.
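Since the four consumption models share most fields, a common base schema avoids duplication; this Pydantic sketch (field names assumed) shows the pattern with fuel as the concrete example:

```python
from datetime import date

from pydantic import BaseModel, Field


class ConsumptionBase(BaseModel):
    """Fields shared by chemical, fuel, water, and scrap consumption
    records; each concrete schema only adds its specifics."""

    scenario_id: int
    recorded_on: date
    quantity: float = Field(gt=0)  # negative or zero usage is rejected
    unit: str


class FuelConsumption(ConsumptionBase):
    fuel_type: str
```
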

Feature: Production Output

Implementation Steps

  1. Create models/production_output.py.
  2. Implement API routes in routes/production.py.
  3. Write Pydantic schemas for production output.
  4. Write unit tests in tests/unit/test_production.py.
  5. Build UI component components/ProductionChart.html.

Feature: Equipment Management

Implementation Steps

  1. Create models/equipment.py for equipment data.
  2. Implement API routes in routes/equipment.py.
  3. Write Pydantic schemas for equipment.
  4. Write unit tests in tests/unit/test_equipment.py.
  5. Build UI component components/EquipmentList.html.

Feature: Maintenance Logging

Implementation Steps

  1. Create models/maintenance.py for maintenance events.
  2. Implement API routes in routes/maintenance.py.
  3. Write Pydantic schemas for maintenance logs.
  4. Write unit tests in tests/unit/test_maintenance.py.
  5. Build UI component components/MaintenanceLog.html.

Feature: Monte Carlo Simulation Engine

Implementation Steps

  1. Implement Monte Carlo logic in services/simulation.py.
  2. Persist results in models/simulation_result.py.
  3. Expose endpoint in routes/simulations.py.
  4. Write unit tests in tests/unit/test_simulation.py.
  5. Build UI component components/SimulationRunner.html.
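The core of services/simulation.py could start as small as this stdlib sketch; the function signature and summary fields are assumptions, but a seeded RNG directly supports the rerun-with-different-seeds acceptance criterion:

```python
import random
import statistics
from typing import Callable, Dict


def run_monte_carlo(
    model: Callable[[Dict[str, float]], float],
    variables: Dict[str, Callable[[random.Random], float]],
    iterations: int = 1000,
    seed: int = 0,
) -> Dict[str, float]:
    """Evaluate `model` against sampled inputs and summarise the outputs.

    `variables` maps each input name to a sampler; seeding the RNG makes
    every run reproducible for a given seed.
    """
    rng = random.Random(seed)
    results = []
    for _ in range(iterations):
        inputs = {name: sampler(rng) for name, sampler in variables.items()}
        results.append(model(inputs))
    return {
        "mean": statistics.fmean(results),
        "stdev": statistics.stdev(results),
        "min": min(results),
        "max": max(results),
    }
```

The full engine would stream per-iteration results into models/simulation_result.py rather than holding them all in memory, per the edge cases noted below.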

Feature: Reporting / Dashboard

Implementation Steps

  1. Implement report calculations in services/reporting.py.
  2. Add detailed and summary endpoints in routes/reporting.py.
  3. Write unit tests in tests/unit/test_reporting.py.
  4. Enhance UI in components/Dashboard.html with charts.
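The summary calculations in services/reporting.py might reduce a series of simulation draws (e.g. NPV values) to the figures the dashboard charts; the P10/P50/P90 choice is an assumption:

```python
import statistics
from typing import Dict, List


def summarize(values: List[float]) -> Dict[str, float]:
    """Mean plus P10/P50/P90 for one simulation output series."""
    # quantiles(n=10) returns the 9 internal decile cut points.
    q = statistics.quantiles(values, n=10, method="inclusive")
    return {
        "mean": statistics.fmean(values),
        "p10": q[0],
        "p50": q[4],
        "p90": q[8],
    }
```
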

MVP Feature Analysis (summary)

Goal: Identify core MVP features, acceptance criteria, and quick estimates.

Features:

  • Scenario Management

    • Acceptance: create/read/update/delete scenarios; persist to DB; API coverage with tests.
    • Estimate: 3-5 days (backend + minimal UI).
  • Parameter Input & Validation

    • Acceptance: define parameter schemas, validate inputs, surface errors to API/UI.
    • Estimate: 2-3 days.
  • Monte Carlo Simulation Engine

    • Acceptance: run parameterised simulations, store results, ability to rerun with different seeds, basic progress reporting.
    • Estimate: 1-2 weeks (core engine + persistence).
  • Reporting / Dashboard

    • Acceptance: display simulation outputs (NPV, IRR distributions), basic charts, export CSV.
    • Estimate: 4-7 days.

Edge cases to consider:

  • Large simulation runs (memory / timeouts) — use streaming, chunking, or background workers.
  • DB migration and schema versioning.
  • Authentication/authorization for scenario access.

Next actionable items:

  1. Break Scenario Management into sub-issues (models, routes, tests, simple UI).
  2. Scaffold Parameter Input & Validation (models/parameters.py, middleware, routes, tests).
  3. Prototype the simulation engine with a small deterministic runner and unit tests.
  4. Scaffold Monte Carlo Simulation endpoints (services/simulation.py, routes/simulations.py, tests).
  5. Scaffold Reporting endpoints (services/reporting.py, routes/reporting.py, front-end Dashboard, tests).
  6. Add CI job for tests and coverage.

UI Template Audit (2025-10-20)

  • Existing HTML templates: ScenarioForm.html, ParameterInput.html, and Dashboard.html (reporting summary view).
  • Coverage gaps remain for costs, consumption, production, equipment, maintenance, and simulation workflows—no dedicated templates yet.
  • Shared layout primitives (navigation/header/footer) are absent; current pages duplicate boilerplate markup.
  • Dashboard currently covers reporting metrics but should be wired to the application's root route (/) once the shared layout lands.
  • Next steps align with the updated TODO checklist: introduce a base.html, refactor existing templates to extend it, and scaffold placeholder pages for the remaining features.
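The base.html called for above could start as a skeleton like this; the block names (title, nav, content, footer) are assumptions, not an existing contract:

```html
<!-- templates/base.html: shared layout the feature pages extend -->
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>{% block title %}CalMiner{% endblock %}</title>
</head>
<body>
  <header>
    <nav>
      <!-- links to scenarios, parameters, costs, reporting, ... -->
      {% block nav %}{% endblock %}
    </nav>
  </header>
  <main>
    {% block content %}{% endblock %}
  </main>
  <footer>{% block footer %}{% endblock %}</footer>
</body>
</html>
```

An existing page such as Dashboard.html would then begin with {% extends "base.html" %} and override only the content block, removing the duplicated boilerplate.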