Refactor and enhance CalMiner application

- Updated README.md to reflect new features and usage instructions.
- Removed the deprecated Dashboard.html component; its functionality is now served by the redesigned dashboard template in the main application.
- Revised architecture documentation for clarity and added module map and request flow diagrams.
- Enhanced maintenance model to include equipment association and cost tracking.
- Updated requirements.txt to include new dependencies (httpx, pandas, numpy, pytest-httpx).
- Improved consumption, maintenance, production, and reporting routes with better validation and response handling.
- Added unit tests for maintenance and production routes, ensuring proper CRUD operations and validation.
- Enhanced reporting service to calculate and return detailed summary statistics.
- Redesigned Dashboard.html for improved user experience and integrated Chart.js for visualizing simulation results.
2025-10-20 20:53:55 +02:00
parent fee857637f
commit e73a987d25
19 changed files with 794 additions and 184 deletions

.gitignore (vendored)
View File

@@ -41,3 +41,4 @@ logs/
# SQLite database
*.sqlite3
test.db

View File

@@ -10,15 +10,15 @@ A range of features are implemented to support these functionalities.
## Features
- **Scenario Management**: The database supports different scenarios for what-if analysis, with parent-child relationships between scenarios.
- **Monte Carlo Simulation**: The system can perform Monte Carlo simulations for risk analysis and probabilistic forecasting.
- **Stochastic Variables**: The model handles uncertainty by defining variables with probability distributions.
- **Cost Tracking**: It tracks capital (`capex`) and operational (`opex`) expenditures.
- **Consumption Tracking**: It monitors the consumption of resources like chemicals, fuel, water, and scrap materials.
- **Production Output**: The database stores production results, including tons produced, recovery rates, and revenue.
- **Process Parameters**: It allows for defining and storing various parameters for different processes and scenarios.
- **Equipment Management**: The system manages equipment and their operational data.
- **Maintenance Logging**: It includes a log for equipment maintenance events.
- **Scenario Management**: Manage multiple mining scenarios with independent parameter sets and outputs.
- **Process Parameters**: Define and persist process inputs via FastAPI endpoints and template-driven forms.
- **Cost Tracking**: Capture capital (`capex`) and operational (`opex`) expenditures per scenario.
- **Consumption Tracking**: Record resource consumption (chemicals, fuel, water, scrap) tied to scenarios.
- **Production Output**: Store production metrics such as tonnage, recovery, and revenue drivers.
- **Equipment Management**: Register scenario-specific equipment inventories.
- **Maintenance Logging**: Log maintenance events against equipment with dates and costs.
- **Reporting Dashboard**: Surface aggregated statistics for simulation outputs with an interactive Chart.js dashboard.
- **Monte Carlo Simulation (in progress)**: Services and routes are scaffolded for future stochastic analysis.
## Architecture
@@ -66,10 +66,36 @@ pip install -r requirements.txt
uvicorn main:app --reload
```
## Usage Overview
- **API base URL**: `http://localhost:8000/api`
- **Key routes**:
- `POST /api/scenarios/` create scenarios
- `POST /api/parameters/` manage process parameters
- `POST /api/costs/capex` and `POST /api/costs/opex` capture project costs
- `POST /api/consumption/` add consumption entries
- `POST /api/production/` register production output
- `POST /api/equipment/` create equipment records
- `POST /api/maintenance/` log maintenance events
- `POST /api/reporting/summary` aggregate simulation results
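As a quick illustration of the payloads these routes expect, the sketch below creates a scenario, registers equipment, and logs a maintenance event with `httpx` (now listed in `requirements.txt`). The field names mirror the Pydantic schemas in this commit; the sample values and a locally running server are assumptions for the example.

```python
import httpx

# Assumes the API is running locally via `uvicorn main:app --reload`.
with httpx.Client(base_url="http://localhost:8000") as client:
    # Create a scenario; duplicate names are rejected with HTTP 400.
    scenario = client.post(
        "/api/scenarios/",
        json={"name": "Pilot Plant", "description": "Baseline what-if scenario"},
    ).json()

    # Register equipment tied to the scenario.
    equipment = client.post(
        "/api/equipment/",
        json={"scenario_id": scenario["id"], "name": "Ball Mill", "description": "Primary grinding"},
    ).json()

    # Log a maintenance event; creation endpoints return HTTP 201.
    response = client.post(
        "/api/maintenance/",
        json={
            "equipment_id": equipment["id"],
            "scenario_id": scenario["id"],
            "maintenance_date": "2025-10-20",
            "description": "Routine check",
            "cost": 100.0,
        },
    )
    print(response.status_code, response.json())
```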
### Dashboard Preview
1. Start the FastAPI server and navigate to `/dashboard` (served by `templates/Dashboard.html`).
2. Use the "Load Sample Data" button to populate the JSON textarea with demo results.
3. Select "Refresh Dashboard" to view calculated statistics and a distribution chart sourced from `/api/reporting/summary`.
4. Paste your own simulation outputs (array of objects containing a numeric `result` property) to visualize custom runs.
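The dashboard and the API share the same payload shape. As a minimal sketch (assuming a locally running server), posting a small result set to `/api/reporting/summary` returns the aggregate fields the dashboard renders:

```python
import httpx

# Array of objects, each carrying a numeric `result` property.
results = [
    {"iteration": 1, "result": 18.2},
    {"iteration": 2, "result": 22.1},
    {"iteration": 3, "result": 30.4},
]

summary = httpx.post("http://localhost:8000/api/reporting/summary", json=results).json()
# Returned keys: count, mean, median, min, max, std_dev, percentile_10, percentile_90
print(summary["mean"], summary["percentile_90"])
```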
## Testing
Testing guidelines and best practices are outlined in [docs/testing.md](docs/testing.md).
To execute the unit test suite:
```powershell
pytest
```
## Database Objects
The database is composed of several tables that store different types of information.

View File

@@ -1,25 +0,0 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>CalMiner Dashboard</title>
</head>
<body>
<h1>Simulation Results Dashboard</h1>
<div id="report-summary"></div>
<script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
<script>
// TODO: fetch summary report and render charts
async function loadReport() {
const response = await fetch('/api/reporting/summary', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify([])
});
const data = await response.json();
document.getElementById('report-summary').innerText = JSON.stringify(data);
}
loadReport();
</script>
</body>
</html>

View File

@@ -2,40 +2,47 @@
## Overview
CalMiner is a web application for planning mining projects, estimating costs, returns, and profitability. It uses Monte Carlo simulations for risk analysis and supports multiple scenarios.
CalMiner is a FastAPI application that collects mining project inputs, persists scenario-specific records, and surfaces aggregated insights. The platform targets Monte Carlo driven planning, with deterministic CRUD features in place and simulation logic staged for future work.
## System Components
- **Frontend**: Web interface for user interaction (to be defined).
- **Backend**: Python API server (e.g., FastAPI) handling business logic.
- **Database**: PostgreSQL.
- **Configuration**: Environment variables and settings loaded via `python-dotenv` and stored in `config/` directory.
- **Simulation Engine**: Python-based Monte Carlo runs and stochastic calculations.
- **API Routes**: FastAPI routers defined in `routes/` for scenarios, simulations, consumptions, and reporting endpoints.
- **FastAPI backend** (`main.py`, `routes/`): hosts REST endpoints for scenarios, parameters, costs, consumption, production, equipment, maintenance, simulations, and reporting. Each router encapsulates request/response schemas and DB access patterns.
- **Service layer** (`services/`): houses business logic. `services/reporting.py` produces statistical summaries, while `services/simulation.py` provides the Monte Carlo integration point.
- **Persistence** (`models/`, `config/database.py`): SQLAlchemy models map to PostgreSQL tables in schema `bricsium_platform`. Relationships connect scenarios to derived domain entities.
- **Presentation** (`templates/`, `components/`): server-rendered views support data entry (scenario and parameter forms) and the dashboard visualization powered by Chart.js.
- **Middleware** (`middleware/validation.py`): applies JSON validation before requests reach routers.
- **Testing** (`tests/unit/`): pytest suite covering route and service behavior.
## Data Flow
## Runtime Flow
1. User inputs scenario parameters via frontend.
2. Backend validates and stores in database.
3. Simulation engine runs Monte Carlo iterations using stochastic variables.
4. Results stored in `simulation_result` table.
5. Frontend displays outputs like NPV, IRR, EBITDA.
1. Users navigate to form templates or API clients to manage scenarios, parameters, and operational data.
2. FastAPI routers validate payloads with Pydantic models, then delegate to SQLAlchemy sessions for persistence.
3. Simulation runs (placeholder `services/simulation.py`) will consume stored parameters to emit iteration results via `/api/simulations/run`.
4. Reporting requests POST simulation outputs to `/api/reporting/summary`; the reporting service calculates aggregates (count, min/max, mean, median, percentiles, standard deviation).
5. `templates/Dashboard.html` fetches summaries, renders metric cards, and plots distribution charts with Chart.js for stakeholder review.
## Database Architecture
## Data Model Highlights
- Schema: `bricsium_platform`
- Key tables include:
- `scenario`: central entity describing a mining scenario; owns relationships to cost, consumption, production, equipment, and maintenance tables.
- `capex`, `opex`: monetary tracking linked to scenarios.
- `consumption`: resource usage entries parameterized by scenario and description.
- `production_output`: production metrics per scenario.
- `equipment` and `maintenance`: equipment inventory and maintenance events with dates/costs.
- `simulation_result`: staging table for future Monte Carlo outputs (not yet populated by `run_simulation`).
- `scenario` (scenario metadata and parameters)
- `capex`, `opex` (capital and operational expenditures)
- `chemical_consumption`, `fuel_consumption`, `water_consumption`, `scrap_consumption`
- `production_output`, `equipment_operation`, `ore_batch`
- `exchange_rate`, `simulation_result`
Foreign keys enforce referential integrity between domain tables and their scenarios, enabling per-scenario analytics.
- Relationships: Foreign keys link scenarios to parameters, consumptions, and simulation results.
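The `Maintenance` model added in this commit shows how that wiring looks in SQLAlchemy. The sketch below is condensed for illustration: the `Scenario` side is inferred from `back_populates="maintenance_items"` rather than copied from the repository, and the equipment foreign key is omitted.

```python
from sqlalchemy import Column, Date, Float, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()  # the real models import Base from config.database


class Scenario(Base):
    __tablename__ = "scenario"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    # One scenario owns many maintenance records (inferred from back_populates).
    maintenance_items = relationship("Maintenance", back_populates="scenario")


class Maintenance(Base):
    __tablename__ = "maintenance"
    id = Column(Integer, primary_key=True, index=True)
    scenario_id = Column(Integer, ForeignKey("scenario.id"), nullable=False)
    maintenance_date = Column(Date, nullable=False)
    description = Column(String, nullable=True)
    cost = Column(Float, nullable=False)
    scenario = relationship("Scenario", back_populates="maintenance_items")
```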
## Integrations and Future Work
## Next Steps
- **Monte Carlo engine**: `services/simulation.py` will incorporate stochastic sampling (e.g., NumPy, SciPy) to populate `simulation_result` and feed reporting.
- **Persistence of results**: `/api/simulations/run` currently returns in-memory results; next iteration should persist to `simulation_result` and reference scenarios.
- **Authentication**: not yet implemented; all endpoints are open.
- **Deployment**: documentation focuses on local development; containerization and CI/CD pipelines remain to be defined.
- Define API endpoints.
- Implement simulation logic.
- Add authentication and user management.
For extended diagrams and setup instructions, see:
- [docs/development_setup.md](development_setup.md) — environment provisioning and tooling.
- [docs/testing.md](testing.md) — pytest workflow and coverage expectations.
- [docs/mvp.md](mvp.md) — roadmap and milestone scope.
- [docs/implementation_plan.md](implementation_plan.md) — feature breakdown aligned with the TODO tracker.
- [docs/architecture_overview.md](architecture_overview.md) — supplementary module map and request flow diagram.

View File

@@ -0,0 +1,44 @@
# Architecture Overview
This overview complements `docs/architecture.md` with a high-level map of CalMiner's module layout and request flow.
## Module Map
- `main.py`: FastAPI entry point bootstrapping routers and middleware.
- `models/`: SQLAlchemy declarative models for all database tables. Key modules:
- `scenario.py`: central scenario entity with relationships to cost, consumption, production, equipment, maintenance, and simulation results.
- `capex.py`, `opex.py`: financial expenditures tied to scenarios.
- `consumption.py`, `production_output.py`: operational data tables.
- `equipment.py`, `maintenance.py`: asset management models.
- `routes/`: REST endpoints grouped by domain (scenarios, parameters, costs, consumption, production, equipment, maintenance, reporting, simulations, UI).
- `services/`: business logic abstractions. `reporting.py` supplies summary statistics; `simulation.py` hosts the Monte Carlo extension point.
- `middleware/validation.py`: request JSON validation prior to hitting routers.
- `templates/`: Jinja2 templates for UI (scenario form, parameter input, dashboard).
## Request Flow
```mermaid
graph TD
A[Browser / API Client] -->|HTTP| B[FastAPI Router]
B --> C[Dependency Injection]
C --> D[SQLAlchemy Session]
B --> E[Service Layer]
E --> D
E --> F[Reporting / Simulation Logic]
D --> G[PostgreSQL]
F --> H[Summary Response]
G --> H
H --> A
```
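In code, the "Dependency Injection → SQLAlchemy Session" hop corresponds to the `get_db` dependency each router in this commit declares. A minimal sketch (the `/api/example` router is illustrative, not an actual route; `SessionLocal` comes from `config/database.py`):

```python
from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session

from config.database import SessionLocal

router = APIRouter(prefix="/api/example", tags=["Example"])


def get_db():
    # One session per request, always closed when the response is done.
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()


@router.get("/")
def list_items(db: Session = Depends(get_db)):
    # Routers delegate persistence work to the injected session.
    return []
```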
## Dashboard Interaction
1. User loads `/dashboard`, served by `templates/Dashboard.html`.
2. Template fetches `/api/reporting/summary` with sample or user-provided simulation outputs.
3. Response metrics populate the summary grid and Chart.js visualization.
## Simulation Roadmap
- Implement stochastic sampling in `services/simulation.py` (e.g., NumPy random draws based on parameter distributions).
- Store iterations in `models/simulation_result.py` via `/api/simulations/run`.
- Feed persisted results into reporting for downstream analytics and historical comparisons.
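One possible shape for that sampling step, sketched with NumPy (now in `requirements.txt`). The function name, parameters, and the normal distribution here are assumptions for illustration, not the repository's API; the output matches the `{"iteration": ..., "result": ...}` records that `/api/reporting/summary` consumes.

```python
import numpy as np


def run_monte_carlo(mean, std_dev, iterations=1000, seed=None):
    """Hypothetical sketch: draw normally distributed samples per iteration."""
    rng = np.random.default_rng(seed)
    draws = rng.normal(loc=mean, scale=std_dev, size=iterations)
    return [
        {"iteration": i + 1, "result": float(value)}
        for i, value in enumerate(draws)
    ]


# Example: feed the output straight into the reporting summary endpoint.
results = run_monte_carlo(mean=25.0, std_dev=4.0, iterations=500)
```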

View File

@@ -1,4 +1,4 @@
from sqlalchemy import Column, Integer, String, DateTime, ForeignKey, func
from sqlalchemy import Column, Date, Float, ForeignKey, Integer, String
from sqlalchemy.orm import relationship
from config.database import Base
@@ -7,11 +7,17 @@ class Maintenance(Base):
__tablename__ = "maintenance"
id = Column(Integer, primary_key=True, index=True)
equipment_id = Column(Integer, ForeignKey("equipment.id"), nullable=False)
scenario_id = Column(Integer, ForeignKey("scenario.id"), nullable=False)
performed_at = Column(DateTime(timezone=True), server_default=func.now())
details = Column(String, nullable=True)
maintenance_date = Column(Date, nullable=False)
description = Column(String, nullable=True)
cost = Column(Float, nullable=False)
equipment = relationship("Equipment")
scenario = relationship("Scenario", back_populates="maintenance_items")
def __repr__(self):
return f"<Maintenance id={self.id} scenario_id={self.scenario_id} performed_at={self.performed_at}>"
def __repr__(self) -> str:
return (
f"<Maintenance id={self.id} equipment_id={self.equipment_id} "
f"scenario_id={self.scenario_id} date={self.maintenance_date} cost={self.cost}>"
)

View File

@@ -3,6 +3,10 @@ uvicorn
sqlalchemy
psycopg2-binary
python-dotenv
httpx
jinja2
pandas
numpy
pytest
pytest-cov
jinja2
pytest-httpx

View File

@@ -1,10 +1,13 @@
from fastapi import APIRouter, Depends, HTTPException
from sqlalchemy.orm import Session
from typing import List, Optional
from pydantic import BaseModel
from fastapi import APIRouter, Depends, status
from pydantic import BaseModel, PositiveFloat
from sqlalchemy.orm import Session
from config.database import SessionLocal
from models.consumption import Consumption
router = APIRouter(prefix="/api/consumption", tags=["Consumption"])
@@ -16,22 +19,25 @@ def get_db():
db.close()
# Pydantic schemas
class ConsumptionCreate(BaseModel):
class ConsumptionBase(BaseModel):
scenario_id: int
amount: float
amount: PositiveFloat
description: Optional[str] = None
class ConsumptionRead(ConsumptionCreate):
class ConsumptionCreate(ConsumptionBase):
pass
class ConsumptionRead(ConsumptionBase):
id: int
class Config:
orm_mode = True
@router.post("/", response_model=ConsumptionRead)
async def create_consumption(item: ConsumptionCreate, db: Session = Depends(get_db)):
@router.post("/", response_model=ConsumptionRead, status_code=status.HTTP_201_CREATED)
def create_consumption(item: ConsumptionCreate, db: Session = Depends(get_db)):
db_item = Consumption(**item.dict())
db.add(db_item)
db.commit()
@@ -40,5 +46,5 @@ async def create_consumption(item: ConsumptionCreate, db: Session = Depends(get_
@router.get("/", response_model=List[ConsumptionRead])
async def list_consumption(db: Session = Depends(get_db)):
def list_consumption(db: Session = Depends(get_db)):
return db.query(Consumption).all()

View File

@@ -1,11 +1,14 @@
from fastapi import APIRouter, Depends, HTTPException
from sqlalchemy.orm import Session
from datetime import date
from typing import List, Optional
from pydantic import BaseModel
from datetime import datetime
from fastapi import APIRouter, Depends, HTTPException, status
from pydantic import BaseModel, PositiveFloat
from sqlalchemy.orm import Session
from config.database import SessionLocal
from models.maintenance import Maintenance
router = APIRouter(prefix="/api/maintenance", tags=["Maintenance"])
@@ -17,29 +20,75 @@ def get_db():
db.close()
# Pydantic schemas
class MaintenanceCreate(BaseModel):
class MaintenanceBase(BaseModel):
equipment_id: int
scenario_id: int
details: Optional[str] = None
maintenance_date: date
description: Optional[str] = None
cost: PositiveFloat
class MaintenanceRead(MaintenanceCreate):
class MaintenanceCreate(MaintenanceBase):
pass
class MaintenanceUpdate(MaintenanceBase):
pass
class MaintenanceRead(MaintenanceBase):
id: int
performed_at: datetime
class Config:
orm_mode = True
@router.post("/", response_model=MaintenanceRead)
async def create_maintenance(item: MaintenanceCreate, db: Session = Depends(get_db)):
db_item = Maintenance(**item.dict())
db.add(db_item)
def _get_maintenance_or_404(db: Session, maintenance_id: int) -> Maintenance:
maintenance = db.query(Maintenance).filter(
Maintenance.id == maintenance_id).first()
if maintenance is None:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail=f"Maintenance record {maintenance_id} not found",
)
return maintenance
@router.post("/", response_model=MaintenanceRead, status_code=status.HTTP_201_CREATED)
def create_maintenance(maintenance: MaintenanceCreate, db: Session = Depends(get_db)):
db_maintenance = Maintenance(**maintenance.dict())
db.add(db_maintenance)
db.commit()
db.refresh(db_item)
return db_item
db.refresh(db_maintenance)
return db_maintenance
@router.get("/", response_model=List[MaintenanceRead])
async def list_maintenance(db: Session = Depends(get_db)):
return db.query(Maintenance).all()
def list_maintenance(skip: int = 0, limit: int = 100, db: Session = Depends(get_db)):
return db.query(Maintenance).offset(skip).limit(limit).all()
@router.get("/{maintenance_id}", response_model=MaintenanceRead)
def get_maintenance(maintenance_id: int, db: Session = Depends(get_db)):
return _get_maintenance_or_404(db, maintenance_id)
@router.put("/{maintenance_id}", response_model=MaintenanceRead)
def update_maintenance(
maintenance_id: int,
payload: MaintenanceUpdate,
db: Session = Depends(get_db),
):
db_maintenance = _get_maintenance_or_404(db, maintenance_id)
for field, value in payload.dict().items():
setattr(db_maintenance, field, value)
db.commit()
db.refresh(db_maintenance)
return db_maintenance
@router.delete("/{maintenance_id}", status_code=status.HTTP_204_NO_CONTENT)
def delete_maintenance(maintenance_id: int, db: Session = Depends(get_db)):
db_maintenance = _get_maintenance_or_404(db, maintenance_id)
db.delete(db_maintenance)
db.commit()

View File

@@ -1,10 +1,13 @@
from fastapi import APIRouter, Depends, HTTPException
from sqlalchemy.orm import Session
from typing import List, Optional
from pydantic import BaseModel
from fastapi import APIRouter, Depends, status
from pydantic import BaseModel, PositiveFloat
from sqlalchemy.orm import Session
from config.database import SessionLocal
from models.production_output import ProductionOutput
router = APIRouter(prefix="/api/production", tags=["Production"])
@@ -16,22 +19,25 @@ def get_db():
db.close()
# Pydantic schemas
class ProductionOutputCreate(BaseModel):
class ProductionOutputBase(BaseModel):
scenario_id: int
amount: float
amount: PositiveFloat
description: Optional[str] = None
class ProductionOutputRead(ProductionOutputCreate):
class ProductionOutputCreate(ProductionOutputBase):
pass
class ProductionOutputRead(ProductionOutputBase):
id: int
class Config:
orm_mode = True
@router.post("/", response_model=ProductionOutputRead)
async def create_production(item: ProductionOutputCreate, db: Session = Depends(get_db)):
@router.post("/", response_model=ProductionOutputRead, status_code=status.HTTP_201_CREATED)
def create_production(item: ProductionOutputCreate, db: Session = Depends(get_db)):
db_item = ProductionOutput(**item.dict())
db.add(db_item)
db.commit()
@@ -40,5 +46,5 @@ async def create_production(item: ProductionOutputCreate, db: Session = Depends(
@router.get("/", response_model=List[ProductionOutputRead])
async def list_production(db: Session = Depends(get_db)):
def list_production(db: Session = Depends(get_db)):
return db.query(ProductionOutput).all()

View File

@@ -1,9 +1,11 @@
from fastapi import APIRouter, HTTPException, Request
from typing import Dict, Any
from typing import Any, Dict, List
from fastapi import APIRouter, HTTPException, Request, status
from pydantic import BaseModel
from services.reporting import generate_report
from sqlalchemy.orm import Session
from config.database import SessionLocal
from services.reporting import generate_report
router = APIRouter(prefix="/api/reporting", tags=["Reporting"])
@@ -16,11 +18,53 @@ def get_db():
db.close()
@router.post("/summary", response_model=Dict[str, float])
def _validate_payload(payload: Any) -> List[Dict[str, float]]:
if not isinstance(payload, list):
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail="Invalid input format",
)
validated: List[Dict[str, float]] = []
for index, item in enumerate(payload):
if not isinstance(item, dict):
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=f"Entry at index {index} must be an object",
)
value = item.get("result")
if not isinstance(value, (int, float)):
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=f"Entry at index {index} must include numeric 'result'",
)
validated.append({"result": float(value)})
return validated
class ReportSummary(BaseModel):
count: int
mean: float
median: float
min: float
max: float
std_dev: float
percentile_10: float
percentile_90: float
@router.post("/summary", response_model=ReportSummary)
async def summary_report(request: Request):
# Read raw JSON to handle invalid input formats
data = await request.json()
if not isinstance(data, list):
raise HTTPException(status_code=400, detail="Invalid input format")
report = generate_report(data)
return report
payload = await request.json()
validated_payload = _validate_payload(payload)
summary = generate_report(validated_payload)
return ReportSummary(
count=int(summary["count"]),
mean=float(summary["mean"]),
median=float(summary["median"]),
min=float(summary["min"]),
max=float(summary["max"]),
std_dev=float(summary["std_dev"]),
percentile_10=float(summary["percentile_10"]),
percentile_90=float(summary["percentile_90"]),
)

View File

@@ -37,13 +37,16 @@ def get_db():
@router.post("/", response_model=ScenarioRead)
def create_scenario(scenario: ScenarioCreate, db: Session = Depends(get_db)):
print(f"Creating scenario with name: {scenario.name}")
db_s = db.query(Scenario).filter(Scenario.name == scenario.name).first()
if db_s:
print(f"Scenario with name {scenario.name} already exists.")
raise HTTPException(status_code=400, detail="Scenario already exists")
new_s = Scenario(name=scenario.name, description=scenario.description)
db.add(new_s)
db.commit()
db.refresh(new_s)
print(f"Scenario with name {scenario.name} created successfully.")
return new_s

View File

@@ -18,9 +18,3 @@ async def scenario_form(request: Request):
async def parameter_form(request: Request):
"""Render the parameter input form."""
return templates.TemplateResponse("ParameterInput.html", {"request": request})
@router.get("/", response_class=HTMLResponse)
async def dashboard(request: Request):
"""Render the central dashboard page."""
return templates.TemplateResponse("Dashboard.html", {"request": request})

View File

@@ -1,15 +1,57 @@
from typing import List, Dict
from statistics import mean, median, pstdev
from typing import Dict, Iterable, List, Union
def generate_report(simulation_results: List[Dict[str, float]]) -> Dict[str, float]:
"""
Generate summary report from simulation results.
def _extract_results(simulation_results: Iterable[Dict[str, float]]) -> List[float]:
values: List[float] = []
for item in simulation_results:
if not isinstance(item, dict):
continue
value = item.get("result")
if isinstance(value, (int, float)):
values.append(float(value))
return values
Args:
simulation_results: List of dicts with 'iteration' and 'result'.
Returns:
Dictionary with summary statistics (e.g., mean, median).
"""
# TODO: implement reporting logic (e.g., calculate mean, median, percentiles)
return {}
def _percentile(values: List[float], percentile: float) -> float:
if not values:
return 0.0
sorted_values = sorted(values)
if len(sorted_values) == 1:
return sorted_values[0]
index = (percentile / 100) * (len(sorted_values) - 1)
lower = int(index)
upper = min(lower + 1, len(sorted_values) - 1)
weight = index - lower
return sorted_values[lower] * (1 - weight) + sorted_values[upper] * weight
def generate_report(simulation_results: List[Dict[str, float]]) -> Dict[str, Union[float, int]]:
"""Aggregate basic statistics for simulation outputs."""
values = _extract_results(simulation_results)
if not values:
return {
"count": 0,
"mean": 0.0,
"median": 0.0,
"min": 0.0,
"max": 0.0,
"std_dev": 0.0,
"percentile_10": 0.0,
"percentile_90": 0.0,
}
summary: Dict[str, Union[float, int]] = {
"count": len(values),
"mean": mean(values),
"median": median(values),
"min": min(values),
"max": max(values),
"percentile_10": _percentile(values, 10),
"percentile_90": _percentile(values, 90),
}
summary["std_dev"] = pstdev(values) if len(values) > 1 else 0.0
return summary
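As a quick sanity check of the interpolation in `_percentile`, the example below (mirroring the unit tests later in this commit) feeds three results through `generate_report`:

```python
from services.reporting import generate_report

report = generate_report([
    {"iteration": 1, "result": 10.0},
    {"iteration": 2, "result": 20.0},
    {"iteration": 3, "result": 30.0},
])
# percentile_10: index = 0.10 * (3 - 1) = 0.2 -> 10.0 * 0.8 + 20.0 * 0.2 = 12.0
# percentile_90: index = 0.90 * (3 - 1) = 1.8 -> 20.0 * 0.2 + 30.0 * 0.8 = 28.0
print(report["percentile_10"], report["percentile_90"])  # 12.0 28.0
```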

View File

@@ -1,25 +1,240 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<head>
<meta charset="UTF-8" />
<title>CalMiner Dashboard</title>
</head>
<body>
<style>
body {
font-family: Arial, sans-serif;
margin: 2rem;
background-color: #f4f5f7;
color: #1f2933;
}
h1 {
font-size: 2rem;
margin-bottom: 1rem;
}
.summary-card {
background: #ffffff;
border-radius: 8px;
box-shadow: 0 2px 6px rgba(0, 0, 0, 0.08);
padding: 1.5rem;
margin-bottom: 2rem;
}
.summary-grid {
display: grid;
gap: 1.5rem;
grid-template-columns: repeat(auto-fit, minmax(160px, 1fr));
}
.metric {
text-align: center;
}
.metric-label {
font-size: 0.8rem;
text-transform: uppercase;
letter-spacing: 0.05em;
color: #52606d;
}
.metric-value {
font-size: 1.4rem;
font-weight: bold;
margin-top: 0.4rem;
}
#chart-container {
background: #ffffff;
border-radius: 8px;
box-shadow: 0 2px 6px rgba(0, 0, 0, 0.08);
padding: 1.5rem;
}
#error-message {
color: #b91d47;
margin-top: 1rem;
}
</style>
</head>
<body>
<h1>Simulation Results Dashboard</h1>
<div id="report-summary"></div>
<div class="summary-card">
<h2>Summary Statistics</h2>
<div id="summary-grid" class="summary-grid"></div>
<p id="error-message" hidden></p>
</div>
<div class="summary-card">
<h2>Sample Results Input</h2>
<p>
Provide simulation outputs as JSON (array of objects containing the
<code>result</code> field) and refresh the dashboard to preview metrics.
</p>
<textarea
id="results-input"
rows="6"
style="width: 100%; font-family: monospace"
></textarea>
<div style="margin-top: 1rem; display: flex; gap: 0.5rem">
<button id="load-sample" type="button">Load Sample Data</button>
<button id="refresh-dashboard" type="button">Refresh Dashboard</button>
</div>
</div>
<div id="chart-container">
<h2>Result Distribution</h2>
<canvas id="summary-chart" height="120"></canvas>
</div>
<script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
<script>
// TODO: fetch summary report and render charts
async function loadReport() {
const response = await fetch('/api/reporting/summary', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify([])
});
const data = await response.json();
document.getElementById('report-summary').innerText = JSON.stringify(data);
const SUMMARY_FIELDS = [
{ key: "mean", label: "Mean" },
{ key: "median", label: "Median" },
{ key: "min", label: "Min" },
{ key: "max", label: "Max" },
{ key: "std_dev", label: "Std Dev" },
{ key: "percentile_10", label: "10th Percentile" },
{ key: "percentile_90", label: "90th Percentile" },
];
async function fetchSummary(results) {
const response = await fetch("/api/reporting/summary", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify(results),
});
if (!response.ok) {
const message = await response.json();
throw new Error(message.detail || "Failed to retrieve summary");
}
loadReport();
return response.json();
}
function getResultsFromInput() {
const textarea = document.getElementById("results-input");
try {
const parsed = JSON.parse(textarea.value || "[]");
if (!Array.isArray(parsed)) {
throw new Error("Input must be a JSON array");
}
return parsed;
} catch (error) {
throw new Error(`Invalid JSON input: ${error.message}`);
}
}
function renderSummary(summary) {
const grid = document.getElementById("summary-grid");
grid.innerHTML = "";
SUMMARY_FIELDS.forEach(({ key, label }) => {
const value = summary[key] ?? 0;
const metric = document.createElement("div");
metric.className = "metric";
metric.innerHTML = `
<div class="metric-label">${label}</div>
<div class="metric-value">${value.toFixed(2)}</div>
`;
grid.appendChild(metric);
});
}
let chartInstance = null;
function renderChart(summary) {
const ctx = document.getElementById("summary-chart").getContext("2d");
const dataPoints = [
summary.min,
summary.percentile_10,
summary.median,
summary.mean,
summary.percentile_90,
summary.max,
].map((value) => Number(value ?? 0));
if (chartInstance) {
chartInstance.destroy();
}
chartInstance = new Chart(ctx, {
type: "line",
data: {
labels: ["Min", "P10", "Median", "Mean", "P90", "Max"],
datasets: [
{
label: "Result Summary",
data: dataPoints,
borderColor: "#2563eb",
backgroundColor: "rgba(37, 99, 235, 0.2)",
tension: 0.3,
fill: true,
},
],
},
options: {
plugins: {
legend: { display: false },
},
scales: {
y: {
beginAtZero: true,
},
},
},
});
}
function showError(message) {
const errorElement = document.getElementById("error-message");
errorElement.textContent = message;
errorElement.hidden = false;
}
function attachHandlers() {
const loadSampleButton = document.getElementById("load-sample");
const refreshButton = document.getElementById("refresh-dashboard");
const sampleData = JSON.stringify(
[
{ result: 18.2 },
{ result: 22.1 },
{ result: 30.4 },
{ result: 25.7 },
{ result: 28.3 },
],
null,
2
);
loadSampleButton.addEventListener("click", () => {
document.getElementById("results-input").value = sampleData;
});
refreshButton.addEventListener("click", async () => {
try {
const results = getResultsFromInput();
const summary = await fetchSummary(results);
renderSummary(summary);
renderChart(summary);
document.getElementById("error-message").hidden = true;
} catch (error) {
console.error(error);
showError(error.message);
}
});
document.getElementById("results-input").value = sampleData;
}
async function initializeDashboard() {
try {
attachHandlers();
const initialResults = getResultsFromInput();
const summary = await fetchSummary(initialResults);
renderSummary(summary);
renderChart(summary);
} catch (error) {
console.error(error);
showError(error.message);
}
}
initializeDashboard();
</script>
</body>
</html>
</body>
</html>

View File

@@ -29,7 +29,7 @@ def test_create_and_list_consumption():
cons_payload = {"scenario_id": sid, "amount": 250.0,
"description": "Monthly consumption"}
resp2 = client.post("/api/consumption/", json=cons_payload)
assert resp2.status_code == 200
assert resp2.status_code == 201
cons = resp2.json()
assert cons["scenario_id"] == sid
assert cons["amount"] == 250.0

View File

@@ -1,8 +1,30 @@
from fastapi.testclient import TestClient
from main import app
from config.database import Base, engine
from uuid import uuid4
# Setup and teardown
from fastapi.testclient import TestClient
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from config.database import Base
from main import app
from routes import (
consumption,
costs,
distributions,
equipment,
maintenance,
parameters,
production,
reporting,
scenarios,
simulations,
ui,
)
SQLALCHEMY_DATABASE_URL = "sqlite:///./test.db"
engine = create_engine(SQLALCHEMY_DATABASE_URL, connect_args={
"check_same_thread": False})
TestingSessionLocal = sessionmaker(
autocommit=False, autoflush=False, bind=engine)
def setup_module(module):
@@ -13,29 +35,145 @@ def teardown_module(module):
Base.metadata.drop_all(bind=engine)
def override_get_db():
db = TestingSessionLocal()
try:
yield db
finally:
db.close()
app.dependency_overrides[maintenance.get_db] = override_get_db
app.dependency_overrides[equipment.get_db] = override_get_db
app.dependency_overrides[scenarios.get_db] = override_get_db
app.dependency_overrides[distributions.get_db] = override_get_db
app.dependency_overrides[parameters.get_db] = override_get_db
app.dependency_overrides[costs.get_db] = override_get_db
app.dependency_overrides[consumption.get_db] = override_get_db
app.dependency_overrides[production.get_db] = override_get_db
app.dependency_overrides[reporting.get_db] = override_get_db
app.dependency_overrides[simulations.get_db] = override_get_db
app.include_router(maintenance.router)
app.include_router(equipment.router)
app.include_router(scenarios.router)
app.include_router(distributions.router)
app.include_router(ui.router)
app.include_router(parameters.router)
app.include_router(costs.router)
app.include_router(consumption.router)
app.include_router(production.router)
app.include_router(reporting.router)
app.include_router(simulations.router)
client = TestClient(app)
def _create_scenario_and_equipment():
scenario_payload = {
"name": f"Test Scenario {uuid4()}",
"description": "Scenario for maintenance tests",
}
scenario_response = client.post("/api/scenarios/", json=scenario_payload)
assert scenario_response.status_code == 200
scenario_id = scenario_response.json()["id"]
equipment_payload = {
"scenario_id": scenario_id,
"name": f"Test Equipment {uuid4()}",
"description": "Equipment linked to maintenance",
}
equipment_response = client.post("/api/equipment/", json=equipment_payload)
assert equipment_response.status_code == 200
equipment_id = equipment_response.json()["id"]
return scenario_id, equipment_id
def _create_maintenance_payload(equipment_id: int, scenario_id: int, description: str):
return {
"equipment_id": equipment_id,
"scenario_id": scenario_id,
"maintenance_date": "2025-10-20",
"description": description,
"cost": 100.0,
}
def test_create_and_list_maintenance():
# Create a scenario to attach maintenance
resp = client.post(
"/api/scenarios/", json={"name": "MaintScenario", "description": "maintenance scenario"}
scenario_id, equipment_id = _create_scenario_and_equipment()
payload = _create_maintenance_payload(
equipment_id, scenario_id, "Create maintenance")
response = client.post("/api/maintenance/", json=payload)
assert response.status_code == 201
created = response.json()
assert created["equipment_id"] == equipment_id
assert created["scenario_id"] == scenario_id
assert created["description"] == "Create maintenance"
list_response = client.get("/api/maintenance/")
assert list_response.status_code == 200
items = list_response.json()
assert any(item["id"] == created["id"] for item in items)
def test_get_maintenance():
scenario_id, equipment_id = _create_scenario_and_equipment()
payload = _create_maintenance_payload(
equipment_id, scenario_id, "Retrieve maintenance"
)
assert resp.status_code == 200
scenario = resp.json()
sid = scenario["id"]
create_response = client.post("/api/maintenance/", json=payload)
assert create_response.status_code == 201
maintenance_id = create_response.json()["id"]
# Create Maintenance record
maint_payload = {"scenario_id": sid, "details": "Routine check"}
resp2 = client.post("/api/maintenance/", json=maint_payload)
assert resp2.status_code == 200
maint = resp2.json()
assert maint["scenario_id"] == sid
assert maint["details"] == "Routine check"
response = client.get(f"/api/maintenance/{maintenance_id}")
assert response.status_code == 200
data = response.json()
assert data["id"] == maintenance_id
assert data["equipment_id"] == equipment_id
assert data["description"] == "Retrieve maintenance"
# List Maintenance records
resp3 = client.get("/api/maintenance/")
assert resp3.status_code == 200
data = resp3.json()
assert any(item["details"] ==
"Routine check" and item["scenario_id"] == sid for item in data)
def test_update_maintenance():
scenario_id, equipment_id = _create_scenario_and_equipment()
create_response = client.post(
"/api/maintenance/",
json=_create_maintenance_payload(
equipment_id, scenario_id, "Maintenance before update"
),
)
assert create_response.status_code == 201
maintenance_id = create_response.json()["id"]
update_payload = {
"equipment_id": equipment_id,
"scenario_id": scenario_id,
"maintenance_date": "2025-11-01",
"description": "Maintenance after update",
"cost": 250.0,
}
response = client.put(
f"/api/maintenance/{maintenance_id}", json=update_payload)
assert response.status_code == 200
updated = response.json()
assert updated["maintenance_date"] == "2025-11-01"
assert updated["description"] == "Maintenance after update"
assert updated["cost"] == 250.0
def test_delete_maintenance():
scenario_id, equipment_id = _create_scenario_and_equipment()
create_response = client.post(
"/api/maintenance/",
json=_create_maintenance_payload(
equipment_id, scenario_id, "Delete maintenance"),
)
assert create_response.status_code == 201
maintenance_id = create_response.json()["id"]
delete_response = client.delete(f"/api/maintenance/{maintenance_id}")
assert delete_response.status_code == 204
get_response = client.get(f"/api/maintenance/{maintenance_id}")
assert get_response.status_code == 404

View File

@@ -3,14 +3,19 @@ from main import app
from config.database import Base, engine
# Setup and teardown
def setup_module(module):
Base.metadata.create_all(bind=engine)
def teardown_module(module):
Base.metadata.drop_all(bind=engine)
client = TestClient(app)
def test_create_and_list_production_output():
# Create a scenario to attach production output
resp = client.post(
@@ -21,9 +26,10 @@ def test_create_and_list_production_output():
sid = scenario["id"]
# Create Production Output item
prod_payload = {"scenario_id": sid, "amount": 300.0, "description": "Daily output"}
prod_payload = {"scenario_id": sid,
"amount": 300.0, "description": "Daily output"}
resp2 = client.post("/api/production/", json=prod_payload)
assert resp2.status_code == 200
assert resp2.status_code == 201
prod = resp2.json()
assert prod["scenario_id"] == sid
assert prod["amount"] == 300.0
@@ -32,4 +38,5 @@ def test_create_and_list_production_output():
resp3 = client.get("/api/production/")
assert resp3.status_code == 200
data = resp3.json()
assert any(item["amount"] == 300.0 and item["scenario_id"] == sid for item in data)
assert any(item["amount"] == 300.0 and item["scenario_id"]
== sid for item in data)

View File

@@ -1,15 +1,38 @@
from services.reporting import generate_report
from routes.reporting import router
from fastapi.testclient import TestClient
from main import app
import pytest
# Function test
from main import app
from services.reporting import generate_report
def test_generate_report_empty():
report = generate_report([])
assert isinstance(report, dict)
assert report == {
"count": 0,
"mean": 0.0,
"median": 0.0,
"min": 0.0,
"max": 0.0,
"std_dev": 0.0,
"percentile_10": 0.0,
"percentile_90": 0.0,
}
def test_generate_report_with_values():
values = [{"iteration": 1, "result": 10.0}, {
"iteration": 2, "result": 20.0}, {"iteration": 3, "result": 30.0}]
report = generate_report(values)
assert report["count"] == 3
assert report["mean"] == pytest.approx(20.0)
assert report["median"] == pytest.approx(20.0)
assert report["min"] == pytest.approx(10.0)
assert report["max"] == pytest.approx(30.0)
assert report["std_dev"] == pytest.approx(8.1649658, rel=1e-6)
assert report["percentile_10"] == pytest.approx(12.0)
assert report["percentile_90"] == pytest.approx(28.0)
# Endpoint test
def test_reporting_endpoint_invalid_input():
client = TestClient(app)
resp = client.post("/api/reporting/summary", json={})
@@ -19,9 +42,29 @@ def test_reporting_endpoint_invalid_input():
def test_reporting_endpoint_success():
client = TestClient(app)
# Minimal input: list of dicts
input_data = [{"iteration": 1, "result": 10.0}]
input_data = [
{"iteration": 1, "result": 10.0},
{"iteration": 2, "result": 20.0},
{"iteration": 3, "result": 30.0},
]
resp = client.post("/api/reporting/summary", json=input_data)
assert resp.status_code == 200
data = resp.json()
assert isinstance(data, dict)
assert data["count"] == 3
assert data["mean"] == pytest.approx(20.0)
@pytest.mark.parametrize(
"payload,expected_detail",
[
(["not-a-dict"], "Entry at index 0 must be an object"),
([{"iteration": 1}], "Entry at index 0 must include numeric 'result'"),
([{"iteration": 1, "result": "bad"}],
"Entry at index 0 must include numeric 'result'"),
],
)
def test_reporting_endpoint_validation_errors(payload, expected_detail):
client = TestClient(app)
resp = client.post("/api/reporting/summary", json=payload)
assert resp.status_code == 400
assert resp.json()["detail"] == expected_detail