Enhance testing framework and UI feedback
- Updated architecture documentation to include details on UI rendering checks and Playwright end-to-end tests.
- Revised testing documentation to specify Playwright for frontend E2E tests and added details on running tests.
- Implemented feedback mechanism in scenario form for successful creation notifications.
- Added feedback div in ScenarioForm.html for user notifications.
- Created new fixtures for Playwright tests to manage server and browser instances.
- Developed comprehensive E2E tests for consumption, costs, equipment, maintenance, production, and scenarios.
- Added smoke tests to verify UI page loading and form submissions.
- Enhanced unit tests for simulation and validation, including new tests for report generation and validation errors.
- Created new test files for router validation to ensure consistent error handling.
- Established a new test suite for UI routes to validate dashboard and reporting functionalities.
- Implemented validation tests to ensure proper handling of JSON payloads.
@@ -16,7 +16,7 @@ The backend leverages SQLAlchemy for ORM mapping to a PostgreSQL database.

- **Presentation** (`templates/`, `components/`): server-rendered views extend a shared `base.html` layout with a persistent left sidebar, pull global styles from `static/css/main.css`, and surface data entry (scenario and parameter forms) alongside the Chart.js-powered dashboard.
- **Reusable partials** (`templates/partials/components.html`): macro library that standardises select inputs, feedback/empty states, and table wrappers so pages remain consistent while keeping DOM hooks stable for existing JavaScript modules.
- **Middleware** (`middleware/validation.py`): applies JSON validation before requests reach routers (see the sketch after this list).
- **Testing** (`tests/unit/`): pytest suite covering route and service behavior.
- **Testing** (`tests/unit/`): pytest suite covering route and service behavior, including UI rendering checks and negative-path router validation tests to ensure consistent HTTP error semantics. Playwright end-to-end coverage exercises core smoke flows (dashboard load, scenario inputs, reporting) and will attach in CI once the workflow installs the required browsers.
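
For illustration, a minimal sketch of such middleware (an assumed shape, not necessarily the repo's exact `middleware/validation.py`) rejects malformed JSON with HTTP 400 before any router runs:

```python
import json

from fastapi import FastAPI, HTTPException, Request

app = FastAPI()


@app.middleware("http")
async def validate_json(request: Request, call_next):
    # Only inspect bodies that claim to be JSON.
    if request.headers.get("content-type", "").startswith("application/json"):
        body = await request.body()  # Starlette caches the body for downstream reuse.
        if body:
            try:
                json.loads(body)
            except ValueError:
                # Mirrors the 400 + "Invalid JSON payload" behavior that
                # tests/unit/test_validation.py asserts against.
                raise HTTPException(
                    status_code=400, detail="Invalid JSON payload")
    return await call_next(request)
```

Because the exception is raised outside the routers, it propagates to the test client rather than being rendered by FastAPI's HTTP exception handlers, which is why the unit tests catch it with `pytest.raises`.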

## Runtime Flow

@@ -7,14 +7,14 @@ CalMiner will use a combination of unit, integration, and end-to-end tests to en

## Frameworks

- **Backend**: pytest for unit and integration tests.
- **Frontend**: (TBD) pytest with Selenium or Playwright.
- **Frontend**: pytest with Playwright for E2E tests.
- **Database**: pytest fixtures with psycopg2 for DB tests.

## Test Types

- **Unit Tests**: Test individual functions/modules.
- **Integration Tests**: Test API endpoints and DB interactions.
- **E2E Tests**: (Future) Playwright for full user flows.
- **E2E Tests**: Playwright for full user flows.

## CI/CD

@@ -25,19 +25,19 @@ CalMiner will use a combination of unit, integration, and end-to-end tests to en

## Running Tests

- Unit: `pytest tests/unit/`
- Integration: `pytest tests/integration/`
- E2E: `pytest tests/e2e/`
- All: `pytest`

## Test Directory Structure

Organize tests under the `tests/` directory mirroring the application structure:

```bash
```text
tests/
    unit/
        test_<module>.py
    integration/
        test_<endpoint>.py
    e2e/
        test_<flow>.py
    fixtures/
        conftest.py
```

@@ -53,6 +53,41 @@ tests/

- Define reusable fixtures in `tests/fixtures/conftest.py`.
- Use temporary in-memory databases or isolated schemas for DB tests.
- Load sample data via fixtures for consistent test environments.
- Leverage the `seeded_ui_data` fixture in `tests/unit/conftest.py` to populate scenarios with related cost, maintenance, and simulation records for deterministic UI route checks (a short sketch follows this list).
- Use `tests/unit/test_ui_routes.py` to verify that `/ui/dashboard`, `/ui/scenarios`, and `/ui/reporting` render expected context and that `/ui/dashboard/data` emits aggregated JSON payloads.
- Use `tests/unit/test_router_validation.py` to exercise request validation branches for scenario creation, parameter distribution rules, simulation inputs, reporting summaries, and maintenance costs.
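
A minimal sketch of that pattern, assuming the `api_client` and `seeded_ui_data` fixtures defined in `tests/unit/conftest.py` (shown later in this commit):

```python
from typing import Any, Dict

from fastapi.testclient import TestClient


def test_dashboard_lists_seeded_scenario(
    api_client: TestClient, seeded_ui_data: Dict[str, Any]
) -> None:
    """The seeded scenario should surface in the rendered dashboard."""
    response = api_client.get("/ui/dashboard")
    assert response.status_code == 200
    # The fixture yields the ORM objects it created, so assertions can
    # target the exact seeded names instead of hard-coded strings.
    scenario = seeded_ui_data["scenario"]
    assert scenario.name in response.text
```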

## E2E (Playwright) Tests

The E2E test suite, located in `tests/e2e/`, uses Playwright to simulate user interactions in a live browser environment. These tests are designed to catch issues in the UI, frontend-backend integration, and overall application flow.

### Fixtures

- `live_server`: A session-scoped fixture that launches the FastAPI application in a separate process, making it accessible to the browser.
- `playwright_instance`, `browser`, `page`: Fixtures mirroring the standard `pytest-playwright` set, for managing the Playwright instance, browser, and individual pages.

### Smoke Tests

- **UI Page Loading**: `test_smoke.py` contains a parameterized test that systematically navigates to all UI routes to ensure they load without errors, have the correct title, and display a primary heading.
- **Form Submissions**: Each major form in the application has a corresponding test file (e.g., `test_scenarios.py`, `test_costs.py`) that verifies the following (a condensed sketch follows this list):
  - The form page loads correctly.
  - A new item can be created by filling out and submitting the form.
  - The application provides immediate visual feedback (e.g., a success message).
  - The UI is dynamically updated to reflect the new item (e.g., a new row in a table).
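
A condensed sketch of that shared shape (the route, API glob, and selectors here are illustrative placeholders rather than the exact markup of every page):

```python
from playwright.sync_api import Page, expect


def create_and_verify(page: Page, route: str, api_glob: str, desc: str) -> None:
    """Shared form-test pattern: fill, submit, await the API call, check the UI."""
    page.goto(route)
    page.fill("input[name='description']", desc)
    # Register the response wait *before* clicking so a fast response
    # cannot slip past the listener.
    with page.expect_response(api_glob) as response_info:
        page.click("button[type='submit']")
    assert response_info.value.ok
    # The table updates in place; expect() auto-retries until it appears.
    expect(page.locator(f"tr:has-text('{desc}')")).to_be_visible()
```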

### Running E2E Tests

To run the Playwright tests, use the following command:

```bash
pytest tests/e2e/
```

To run the tests in headed mode and observe the browser interactions, use:

```bash
pytest tests/e2e/ --headed
```

## Mocking and Dependency Injection

@@ -62,13 +97,15 @@ tests/

## Code Coverage

- Install `pytest-cov` to generate coverage reports.
- Run with coverage: `pytest --cov=calminer --cov-report=html`.
- Ensure coverage meets the 80% threshold.
- Run with coverage: `pytest --cov --cov-report=term` for quick baselines (use `--cov-report=html` when visualizing hotspots).
- Target 95%+ overall coverage. Focus on historically low modules: `services/simulation.py`, `services/reporting.py`, `middleware/validation.py`, and `routes/ui.py`.
- Recent additions include unit tests that validate Monte Carlo parameter errors, reporting fallbacks, and JSON middleware rejection paths to guard against malformed inputs.

## CI Integration

- Configure GitHub Actions workflow in `.github/workflows/ci.yml` to:
  - Install dependencies
  - Run `pytest` with coverage
  - Fail on coverage <80%
  - Upload coverage artifact
  - Install dependencies, including Playwright browsers (`playwright install`).
  - Run `pytest` with coverage for unit tests.
  - Run `pytest tests/e2e/` for E2E tests.
  - Fail on coverage <80%.
  - Upload coverage artifact.

@@ -65,5 +65,14 @@ document.addEventListener("DOMContentLoaded", () => {

    form.reset();
    nameInput.focus();

    const feedback = document.getElementById("feedback");
    if (feedback) {
      feedback.textContent = `Scenario "${data.name}" created successfully.`;
      feedback.classList.remove("hidden");
      setTimeout(() => {
        feedback.classList.add("hidden");
      }, 3000);
    }
  });
});

@@ -13,6 +13,7 @@ endblock %} {% block content %}

  </label>
  <button type="submit" class="btn primary">Create Scenario</button>
</form>
<div id="feedback" class="feedback hidden" aria-live="polite"></div>
<div class="table-container">
  {% if scenarios %}
  <table id="scenario-table">

tests/e2e/conftest.py (new file, 49 lines)
@@ -0,0 +1,49 @@

import subprocess
import time
from typing import Generator

import pytest
from playwright.sync_api import Browser, Page, Playwright, sync_playwright

# Use a different port for the test server to avoid conflicts
TEST_PORT = 8001
BASE_URL = f"http://localhost:{TEST_PORT}"


@pytest.fixture(scope="session")
def live_server() -> Generator[str, None, None]:
    """Launch a live test server in a separate process.

    Session-scoped, matching the docs: one uvicorn process serves the run.
    """
    process = subprocess.Popen(
        ["uvicorn", "main:app", f"--port={TEST_PORT}"],
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    time.sleep(2)  # Give the server a moment to start
    yield BASE_URL
    process.terminate()
    process.wait()


@pytest.fixture(scope="session")
def playwright_instance() -> Generator[Playwright, None, None]:
    """Provide a Playwright instance for the test session."""
    with sync_playwright() as p:
        yield p


@pytest.fixture(scope="session")
def browser(
    playwright_instance: Playwright,
) -> Generator[Browser, None, None]:
    """Provide a browser instance for the test session."""
    browser = playwright_instance.chromium.launch()
    yield browser
    browser.close()


@pytest.fixture()
def page(browser: Browser, live_server: str) -> Generator[Page, None, None]:
    """Provide a new page for each test."""
    page = browser.new_page(base_url=live_server)
    yield page
    page.close()
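
The fixed `time.sleep(2)` can still race a slow server start. A hedged alternative is to poll until the port answers before yielding; a sketch (the helper name and timeout are illustrative):

```python
import time
import urllib.error
import urllib.request


def wait_for_server(url: str, timeout: float = 10.0) -> None:
    """Poll the URL until the server answers or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            urllib.request.urlopen(url, timeout=1)
            return  # Server responded successfully.
        except urllib.error.HTTPError:
            return  # Any HTTP status at all means the server is up.
        except (urllib.error.URLError, OSError):
            time.sleep(0.2)  # Not listening yet; retry shortly.
    raise RuntimeError(f"Server at {url} did not start within {timeout}s")
```

The `live_server` fixture could then call `wait_for_server(BASE_URL)` in place of the sleep.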
tests/e2e/test_consumption.py (new file, 42 lines)
@@ -0,0 +1,42 @@

from uuid import uuid4

from playwright.sync_api import Page, expect


def test_consumption_form_loads(page: Page):
    """Verify the consumption form page loads correctly."""
    page.goto("/ui/consumption")
    expect(page).to_have_title("CalMiner Consumption")
    expect(page.locator("h1")).to_have_text("Consumption")


def test_create_consumption_item(page: Page):
    """Test creating a new consumption item through the UI."""
    # First, create a scenario to associate the consumption with.
    page.goto("/ui/scenarios")
    scenario_name = f"Consumption Test Scenario {uuid4()}"
    page.fill("input[name='name']", scenario_name)
    # Register the response wait before clicking so the POST cannot
    # complete before the listener is attached.
    with page.expect_response("**/api/scenarios/"):
        page.click("button[type='submit']")

    # Now, navigate to the consumption page and add an item.
    page.goto("/ui/consumption")

    # Create a consumption item.
    consumption_desc = "Diesel for generators"
    page.select_option("select[name='scenario_id']", label=scenario_name)
    page.fill("input[name='description']", consumption_desc)
    page.fill("input[name='amount']", "5000")
    with page.expect_response("**/api/consumption/") as response_info:
        page.click("button[type='submit']")
    assert response_info.value.status == 201

    # Verify the new item appears in the table.
    expect(page.locator(f"tr:has-text('{consumption_desc}')")).to_be_visible()

    # Verify the feedback message.
    expect(page.locator("#consumption-feedback")
           ).to_have_text("Consumption record saved.")
tests/e2e/test_costs.py (new file, 58 lines)
@@ -0,0 +1,58 @@

from uuid import uuid4

from playwright.sync_api import Page, expect


def test_costs_form_loads(page: Page):
    """Verify the costs form page loads correctly."""
    page.goto("/ui/costs")
    expect(page).to_have_title("CalMiner Costs")
    expect(page.locator("h1")).to_have_text("Costs")


def test_create_capex_and_opex_items(page: Page):
    """Test creating new CAPEX and OPEX items through the UI."""
    # First, create a scenario to associate the costs with.
    page.goto("/ui/scenarios")
    scenario_name = f"Cost Test Scenario {uuid4()}"
    page.fill("input[name='name']", scenario_name)
    # Register the response wait before clicking to avoid a race.
    with page.expect_response("**/api/scenarios/"):
        page.click("button[type='submit']")

    # Now, navigate to the costs page and add CAPEX and OPEX items.
    page.goto("/ui/costs")

    # Create a CAPEX item. Scope selectors to #capex-form so the OPEX
    # form's identically named fields are not matched by mistake.
    capex_desc = "Initial drilling equipment"
    page.select_option(
        "#capex-form select[name='scenario_id']", label=scenario_name)
    page.fill("#capex-form input[name='description']", capex_desc)
    page.fill("#capex-form input[name='amount']", "150000")
    with page.expect_response("**/api/costs/capex") as response_info:
        page.click("#capex-form button[type='submit']")
    assert response_info.value.status == 200

    # Create an OPEX item, scoped to #opex-form for the same reason.
    opex_desc = "Monthly fuel costs"
    page.select_option(
        "#opex-form select[name='scenario_id']", label=scenario_name)
    page.fill("#opex-form input[name='description']", opex_desc)
    page.fill("#opex-form input[name='amount']", "25000")
    with page.expect_response("**/api/costs/opex") as response_info:
        page.click("#opex-form button[type='submit']")
    assert response_info.value.status == 200

    # Verify the new items appear in their respective tables.
    expect(page.locator(
        f"#capex-table tr:has-text('{capex_desc}')")).to_be_visible()
    expect(page.locator(
        f"#opex-table tr:has-text('{opex_desc}')")).to_be_visible()

    # Verify the feedback messages.
    expect(page.locator("#capex-feedback")
           ).to_have_text("Entry saved successfully.")
    expect(page.locator("#opex-feedback")
           ).to_have_text("Entry saved successfully.")
tests/e2e/test_dashboard.py (new file, 18 lines)
@@ -0,0 +1,18 @@

from playwright.sync_api import Page, expect


def test_dashboard_loads_and_has_title(page: Page):
    """Verify the dashboard page loads and the title is correct."""
    page.goto("/ui/dashboard")  # Navigate first; a fresh page has no title.
    expect(page).to_have_title("CalMiner Dashboard")


def test_dashboard_shows_summary_metrics_panel(page: Page):
    """Check that the summary metrics panel is visible."""
    page.goto("/ui/dashboard")
    summary_panel = page.locator("section.panel h2:has-text('Summary Metrics')")
    expect(summary_panel).to_be_visible()


def test_dashboard_renders_cost_chart(page: Page):
    """Ensure the scenario cost chart canvas is present."""
    page.goto("/ui/dashboard")
    cost_chart = page.locator("#scenario-cost-chart")
    expect(cost_chart).to_be_visible()
tests/e2e/test_equipment.py (new file, 43 lines)
@@ -0,0 +1,43 @@

from uuid import uuid4

from playwright.sync_api import Page, expect


def test_equipment_form_loads(page: Page):
    """Verify the equipment form page loads correctly."""
    page.goto("/ui/equipment")
    expect(page).to_have_title("CalMiner Equipment")
    expect(page.locator("h1")).to_have_text("Equipment")


def test_create_equipment_item(page: Page):
    """Test creating a new equipment item through the UI."""
    # First, create a scenario to associate the equipment with.
    page.goto("/ui/scenarios")
    scenario_name = f"Equipment Test Scenario {uuid4()}"
    page.fill("input[name='name']", scenario_name)
    # Register the response wait before clicking to avoid a race.
    with page.expect_response("**/api/scenarios/"):
        page.click("button[type='submit']")

    # Now, navigate to the equipment page and add an item.
    page.goto("/ui/equipment")

    # Create an equipment item.
    equipment_name = "Haul Truck HT-05"
    equipment_desc = "Primary haul truck for ore transport."
    page.select_option("select[name='scenario_id']", label=scenario_name)
    page.fill("input[name='name']", equipment_name)
    page.fill("textarea[name='description']", equipment_desc)
    with page.expect_response("**/api/equipment/") as response_info:
        page.click("button[type='submit']")
    assert response_info.value.status == 200

    # Verify the new item appears in the table.
    expect(page.locator(f"tr:has-text('{equipment_name}')")).to_be_visible()

    # Verify the feedback message.
    expect(page.locator("#equipment-feedback")
           ).to_have_text("Equipment saved.")
tests/e2e/test_maintenance.py (new file, 52 lines)
@@ -0,0 +1,52 @@

from uuid import uuid4

from playwright.sync_api import Page, expect


def test_maintenance_form_loads(page: Page):
    """Verify the maintenance form page loads correctly."""
    page.goto("/ui/maintenance")
    expect(page).to_have_title("CalMiner Maintenance")
    expect(page.locator("h1")).to_have_text("Maintenance")


def test_create_maintenance_item(page: Page):
    """Test creating a new maintenance item through the UI."""
    # First, create a scenario and an equipment item. Register each
    # response wait before clicking to avoid races.
    page.goto("/ui/scenarios")
    scenario_name = f"Maintenance Test Scenario {uuid4()}"
    page.fill("input[name='name']", scenario_name)
    with page.expect_response("**/api/scenarios/"):
        page.click("button[type='submit']")

    page.goto("/ui/equipment")
    equipment_name = f"Excavator EX-12 {uuid4()}"
    page.select_option("select[name='scenario_id']", label=scenario_name)
    page.fill("input[name='name']", equipment_name)
    with page.expect_response("**/api/equipment/"):
        page.click("button[type='submit']")

    # Now, navigate to the maintenance page and add an item.
    page.goto("/ui/maintenance")

    # Create a maintenance item.
    maintenance_desc = "Scheduled engine overhaul"
    page.select_option("select[name='scenario_id']", label=scenario_name)
    page.select_option("select[name='equipment_id']", label=equipment_name)
    page.fill("input[name='maintenance_date']", "2025-12-01")
    page.fill("textarea[name='description']", maintenance_desc)
    page.fill("input[name='cost']", "12000")
    with page.expect_response("**/api/maintenance/") as response_info:
        page.click("button[type='submit']")
    assert response_info.value.status == 201

    # Verify the new item appears in the table.
    expect(page.locator(f"tr:has-text('{maintenance_desc}')")).to_be_visible()

    # Verify the feedback message.
    expect(page.locator("#maintenance-feedback")
           ).to_have_text("Maintenance entry saved.")
tests/e2e/test_production.py (new file, 42 lines)
@@ -0,0 +1,42 @@

from uuid import uuid4

from playwright.sync_api import Page, expect


def test_production_form_loads(page: Page):
    """Verify the production form page loads correctly."""
    page.goto("/ui/production")
    expect(page).to_have_title("CalMiner Production")
    expect(page.locator("h1")).to_have_text("Production")


def test_create_production_item(page: Page):
    """Test creating a new production item through the UI."""
    # First, create a scenario to associate the production with.
    page.goto("/ui/scenarios")
    scenario_name = f"Production Test Scenario {uuid4()}"
    page.fill("input[name='name']", scenario_name)
    # Register the response wait before clicking to avoid a race.
    with page.expect_response("**/api/scenarios/"):
        page.click("button[type='submit']")

    # Now, navigate to the production page and add an item.
    page.goto("/ui/production")

    # Create a production item.
    production_desc = "Ore extracted - Grade A"
    page.select_option("select[name='scenario_id']", label=scenario_name)
    page.fill("input[name='description']", production_desc)
    page.fill("input[name='amount']", "1500")
    with page.expect_response("**/api/production/") as response_info:
        page.click("button[type='submit']")
    assert response_info.value.status == 201

    # Verify the new item appears in the table.
    expect(page.locator(f"tr:has-text('{production_desc}')")).to_be_visible()

    # Verify the feedback message.
    expect(page.locator("#production-feedback")
           ).to_have_text("Production output saved.")
tests/e2e/test_reporting.py (new file, 8 lines)
@@ -0,0 +1,8 @@

from playwright.sync_api import Page, expect


def test_reporting_view_loads(page: Page):
    """Verify the reporting view page loads correctly."""
    # Start from the dashboard so the sidebar link exists before clicking.
    page.goto("/")
    page.click("a[href='/ui/reporting']")
    expect(page).to_have_url("/ui/reporting")
    expect(page.locator("h2:has-text('Reporting')")).to_be_visible()
tests/e2e/test_scenarios.py (new file, 42 lines)
@@ -0,0 +1,42 @@

from uuid import uuid4

from playwright.sync_api import Page, expect


def test_scenario_form_loads(page: Page):
    """Verify the scenario form page loads correctly."""
    page.goto("/ui/scenarios")
    # Relative URLs resolve against the fixture's base_url, so the
    # assertion does not need to hardcode the test port.
    expect(page).to_have_url("/ui/scenarios")
    expect(page.locator("h2:has-text('Create New Scenario')")).to_be_visible()


def test_create_new_scenario(page: Page):
    """Test creating a new scenario via the UI form."""
    page.goto("/ui/scenarios")

    scenario_name = f"E2E Test Scenario {uuid4()}"
    scenario_desc = "A scenario created during an end-to-end test."

    page.fill("input[name='name']", scenario_name)
    page.fill("textarea[name='description']", scenario_desc)

    # Expect a network response from the POST request after clicking the submit button.
    with page.expect_response("**/api/scenarios/") as response_info:
        page.click("button[type='submit']")

    response = response_info.value
    assert response.status == 200

    # After a successful submission, the new scenario should be visible in the table.
    # The table is dynamically updated, so we might need to wait for it to appear.
    new_row = page.locator(f"tr:has-text('{scenario_name}')")
    expect(new_row).to_be_visible()
    expect(new_row.locator("td").nth(1)).to_have_text(scenario_desc)

    # Verify the feedback message.
    feedback = page.locator("#feedback")
    expect(feedback).to_be_visible()
    expect(feedback).to_have_text(
        f'Scenario "{scenario_name}" created successfully.')
tests/e2e/test_smoke.py (new file, 29 lines)
@@ -0,0 +1,29 @@

import pytest
from playwright.sync_api import Page, expect

# A list of UI routes to check, with their URL, expected title, and a key heading text.
UI_ROUTES = [
    ("/", "CalMiner Dashboard", "Dashboard"),
    ("/ui/dashboard", "CalMiner Dashboard", "Dashboard"),
    ("/ui/scenarios", "CalMiner Scenarios", "Scenarios"),
    ("/ui/parameters", "CalMiner Parameters", "Parameters"),
    ("/ui/costs", "CalMiner Costs", "Costs"),
    ("/ui/consumption", "CalMiner Consumption", "Consumption"),
    ("/ui/production", "CalMiner Production", "Production"),
    ("/ui/equipment", "CalMiner Equipment", "Equipment"),
    ("/ui/maintenance", "CalMiner Maintenance", "Maintenance"),
    ("/ui/simulations", "CalMiner Simulations", "Simulations"),
    ("/ui/reporting", "CalMiner Reporting", "Reporting"),
]


@pytest.mark.usefixtures("live_server")
@pytest.mark.parametrize("url, title, heading", UI_ROUTES)
def test_ui_pages_load_correctly(page: Page, url: str, title: str, heading: str):
    """Verify that all UI pages load with the correct title and a visible heading."""
    page.goto(url)
    expect(page).to_have_title(title)
    # The app uses a mix of h1 and h2 for main page headings.
    heading_locator = page.locator(
        f"h1:has-text('{heading}'), h2:has-text('{heading}')")
    expect(heading_locator.first).to_be_visible()

@@ -1,4 +1,6 @@

from typing import Generator
from datetime import date
from typing import Any, Dict, Generator
from uuid import uuid4

import pytest
from fastapi.testclient import TestClient
@@ -8,6 +10,15 @@ from sqlalchemy.pool import StaticPool

from config.database import Base
from main import app
from models.capex import Capex
from models.consumption import Consumption
from models.equipment import Equipment
from models.maintenance import Maintenance
from models.opex import Opex
from models.parameters import Parameter
from models.production_output import ProductionOutput
from models.scenario import Scenario
from models.simulation_result import SimulationResult

SQLALCHEMY_TEST_URL = "sqlite:///:memory:"
engine = create_engine(
@@ -35,6 +46,19 @@ def setup_database() -> Generator[None, None, None]:
        simulation_result,
    )  # noqa: F401 - imported for side effects

    _ = (
        capex,
        consumption,
        distribution,
        equipment,
        maintenance,
        opex,
        parameters,
        production_output,
        scenario,
        simulation_result,
    )

    Base.metadata.create_all(bind=engine)
    yield
    Base.metadata.drop_all(bind=engine)
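
The hunk cuts off the `create_engine(` call above; for an in-memory SQLite database shared across the test client's threads, the usual shape is the following (a sketch inferred from the `StaticPool` import in the hunk context, not the repo's verbatim code):

```python
from sqlalchemy import create_engine
from sqlalchemy.pool import StaticPool

SQLALCHEMY_TEST_URL = "sqlite:///:memory:"

# StaticPool keeps one connection alive so every session sees the same
# in-memory database; check_same_thread=False lets the TestClient's
# worker thread reuse that connection.
engine = create_engine(
    SQLALCHEMY_TEST_URL,
    connect_args={"check_same_thread": False},
    poolclass=StaticPool,
)
```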
@@ -65,3 +89,151 @@ def api_client(db_session: Session) -> Generator[TestClient, None, None]:

    yield client

    app.dependency_overrides.pop(route_dependencies.get_db, None)


@pytest.fixture()
def seeded_ui_data(db_session: Session) -> Generator[Dict[str, Any], None, None]:
    """Populate a scenario with representative related records for UI tests."""
    scenario_name = f"Scenario Alpha {uuid4()}"
    scenario = Scenario(name=scenario_name,
                        description="Seeded UI scenario")
    db_session.add(scenario)
    db_session.flush()

    parameter = Parameter(
        scenario_id=scenario.id,
        name="Ore Grade",
        value=1.5,
        distribution_type="normal",
        distribution_parameters={"mean": 1.5, "std_dev": 0.1},
    )
    capex = Capex(
        scenario_id=scenario.id,
        amount=1_000_000.0,
        description="Drill purchase",
    )
    opex = Opex(
        scenario_id=scenario.id,
        amount=250_000.0,
        description="Fuel spend",
    )
    consumption = Consumption(
        scenario_id=scenario.id,
        amount=1_200.0,
        description="Diesel (L)",
    )
    production = ProductionOutput(
        scenario_id=scenario.id,
        amount=800.0,
        description="Ore (tonnes)",
    )
    equipment = Equipment(
        scenario_id=scenario.id,
        name="Excavator 42",
        description="Primary loader",
    )
    db_session.add_all(
        [parameter, capex, opex, consumption, production, equipment]
    )
    db_session.flush()

    maintenance = Maintenance(
        scenario_id=scenario.id,
        equipment_id=equipment.id,
        maintenance_date=date(2025, 1, 15),
        description="Hydraulic service",
        cost=15_000.0,
    )
    simulation_results = [
        SimulationResult(
            scenario_id=scenario.id,
            iteration=index,
            result=value,
        )
        for index, value in enumerate((950_000.0, 975_000.0, 990_000.0), start=1)
    ]

    db_session.add(maintenance)
    db_session.add_all(simulation_results)
    db_session.commit()

    try:
        yield {
            "scenario": scenario,
            "equipment": equipment,
            "simulation_results": simulation_results,
        }
    finally:
        db_session.query(SimulationResult).filter_by(
            scenario_id=scenario.id
        ).delete()
        db_session.query(Maintenance).filter_by(
            scenario_id=scenario.id
        ).delete()
        db_session.query(Equipment).filter_by(id=equipment.id).delete()
        db_session.query(ProductionOutput).filter_by(
            scenario_id=scenario.id
        ).delete()
        db_session.query(Consumption).filter_by(
            scenario_id=scenario.id
        ).delete()
        db_session.query(Opex).filter_by(scenario_id=scenario.id).delete()
        db_session.query(Capex).filter_by(scenario_id=scenario.id).delete()
        db_session.query(Parameter).filter_by(scenario_id=scenario.id).delete()
        db_session.query(Scenario).filter_by(id=scenario.id).delete()
        db_session.commit()


@pytest.fixture()
def invalid_request_payloads(db_session: Session) -> Generator[Dict[str, Any], None, None]:
    """Provide reusable invalid request bodies for exercising validation branches."""
    duplicate_name = f"Scenario Duplicate {uuid4()}"
    existing = Scenario(name=duplicate_name,
                        description="Existing scenario for duplicate checks")
    db_session.add(existing)
    db_session.commit()

    payloads: Dict[str, Any] = {
        "existing_scenario": existing,
        "scenario_duplicate": {
            "name": duplicate_name,
            "description": "Second scenario should fail with duplicate name",
        },
        "parameter_missing_scenario": {
            "scenario_id": existing.id + 99,
            "name": "Invalid Parameter",
            "value": 1.0,
        },
        "parameter_invalid_distribution": {
            "scenario_id": existing.id,
            "name": "Weird Dist",
            "value": 2.5,
            "distribution_type": "invalid",
        },
        "simulation_unknown_scenario": {
            "scenario_id": existing.id + 99,
            "iterations": 10,
            "parameters": [
                {"name": "grade", "value": 1.2, "distribution": "normal"}
            ],
        },
        "simulation_missing_parameters": {
            "scenario_id": existing.id,
            "iterations": 5,
            "parameters": [],
        },
        "reporting_non_list_payload": {"result": 10.0},
        "reporting_missing_result": [{"value": 12.0}],
        "maintenance_negative_cost": {
            "equipment_id": 1,
            "scenario_id": existing.id,
            "maintenance_date": "2025-01-15",
            "cost": -500.0,
        },
    }

    try:
        yield payloads
    finally:
        db_session.query(Scenario).filter_by(id=existing.id).delete()
        db_session.commit()

@@ -49,6 +49,32 @@ def test_generate_report_with_values():

    assert math.isclose(float(report["expected_shortfall_95"]), 10.0)


def test_generate_report_single_value():
    report = generate_report([
        {"iteration": 1, "result": 42.0},
    ])
    assert report["count"] == 1
    assert report["std_dev"] == 0.0
    assert report["variance"] == 0.0
    assert report["percentile_10"] == 42.0
    assert report["expected_shortfall_95"] == 42.0


def test_generate_report_ignores_invalid_entries():
    raw_values: List[Any] = [
        {"iteration": 1, "result": 10.0},
        "not-a-mapping",
        {"iteration": 2},
        {"iteration": 3, "result": None},
        {"iteration": 4, "result": 20},
    ]
    report = generate_report(raw_values)
    assert report["count"] == 2
    assert math.isclose(float(report["mean"]), 15.0)
    assert math.isclose(float(report["min"]), 10.0)
    assert math.isclose(float(report["max"]), 20.0)


@pytest.fixture
def client(api_client: TestClient) -> TestClient:
    return api_client

tests/unit/test_router_validation.py (new file, 95 lines)
@@ -0,0 +1,95 @@

from typing import Any, Dict

import pytest
from fastapi.testclient import TestClient


@pytest.mark.usefixtures("invalid_request_payloads")
def test_duplicate_scenario_returns_400(
    api_client: TestClient, invalid_request_payloads: Dict[str, Any]
) -> None:
    payload = invalid_request_payloads["scenario_duplicate"]
    response = api_client.post("/api/scenarios/", json=payload)
    assert response.status_code == 400
    body = response.json()
    assert body["detail"] == "Scenario already exists"


@pytest.mark.usefixtures("invalid_request_payloads")
def test_parameter_create_missing_scenario_returns_404(
    api_client: TestClient, invalid_request_payloads: Dict[str, Any]
) -> None:
    payload = invalid_request_payloads["parameter_missing_scenario"]
    response = api_client.post("/api/parameters/", json=payload)
    assert response.status_code == 404
    assert response.json()["detail"] == "Scenario not found"


@pytest.mark.usefixtures("invalid_request_payloads")
def test_parameter_create_invalid_distribution_is_422(
    api_client: TestClient
) -> None:
    response = api_client.post(
        "/api/parameters/",
        json={
            "scenario_id": 1,
            "name": "Bad Dist",
            "value": 2.0,
            "distribution_type": "invalid",
        },
    )
    assert response.status_code == 422
    errors = response.json()["detail"]
    assert any("distribution_type" in err["loc"] for err in errors)


@pytest.mark.usefixtures("invalid_request_payloads")
def test_simulation_unknown_scenario_returns_404(
    api_client: TestClient, invalid_request_payloads: Dict[str, Any]
) -> None:
    payload = invalid_request_payloads["simulation_unknown_scenario"]
    response = api_client.post("/api/simulations/run", json=payload)
    assert response.status_code == 404
    assert response.json()["detail"] == "Scenario not found"


@pytest.mark.usefixtures("invalid_request_payloads")
def test_simulation_missing_parameters_returns_400(
    api_client: TestClient, invalid_request_payloads: Dict[str, Any]
) -> None:
    payload = invalid_request_payloads["simulation_missing_parameters"]
    response = api_client.post("/api/simulations/run", json=payload)
    assert response.status_code == 400
    assert response.json()["detail"] == "No parameters provided"


@pytest.mark.usefixtures("invalid_request_payloads")
def test_reporting_summary_rejects_non_list_payload(
    api_client: TestClient, invalid_request_payloads: Dict[str, Any]
) -> None:
    payload = invalid_request_payloads["reporting_non_list_payload"]
    response = api_client.post("/api/reporting/summary", json=payload)
    assert response.status_code == 400
    assert response.json()["detail"] == "Invalid input format"


@pytest.mark.usefixtures("invalid_request_payloads")
def test_reporting_summary_requires_result_field(
    api_client: TestClient, invalid_request_payloads: Dict[str, Any]
) -> None:
    payload = invalid_request_payloads["reporting_missing_result"]
    response = api_client.post("/api/reporting/summary", json=payload)
    assert response.status_code == 400
    assert "must include numeric 'result'" in response.json()["detail"]


@pytest.mark.usefixtures("invalid_request_payloads")
def test_maintenance_negative_cost_rejected_by_schema(
    api_client: TestClient, invalid_request_payloads: Dict[str, Any]
) -> None:
    payload = invalid_request_payloads["maintenance_negative_cost"]
    response = api_client.post("/api/maintenance/", json=payload)
    assert response.status_code == 422
    error_locations = [tuple(item["loc"])
                       for item in response.json()["detail"]]
    assert ("body", "cost") in error_locations

@@ -4,6 +4,8 @@ import pytest

from fastapi.testclient import TestClient
from sqlalchemy.orm import Session

from typing import Any, Dict, List

from models.simulation_result import SimulationResult
from services.simulation import run_simulation

@@ -14,7 +16,7 @@ def client(api_client: TestClient) -> TestClient:


def test_run_simulation_function_generates_samples():
    params = [
    params: List[Dict[str, Any]] = [
        {"name": "grade", "value": 1.8, "distribution": "normal", "std_dev": 0.2},
        {
            "name": "recovery",
@@ -30,8 +32,73 @@ def test_run_simulation_function_generates_samples():
    assert results[0]["iteration"] == 1


def test_run_simulation_with_zero_iterations_returns_empty():
    params: List[Dict[str, Any]] = [
        {"name": "grade", "value": 1.2, "distribution": "normal"}
    ]
    results = run_simulation(params, iterations=0)
    assert results == []


@pytest.mark.parametrize(
    "parameter_payload,error_message",
    [
        ({"name": "missing-value"}, "Parameter at index 0 must include 'value'"),
        (
            {
                "name": "bad-dist",
                "value": 1.0,
                "distribution": "unsupported",
            },
            "Parameter 'bad-dist' has unsupported distribution 'unsupported'",
        ),
        (
            {
                "name": "uniform-range",
                "value": 1.0,
                "distribution": "uniform",
                "min": 5,
                "max": 5,
            },
            "Parameter 'uniform-range' requires 'min' < 'max' for uniform distribution",
        ),
        (
            {
                "name": "triangular-mode",
                "value": 5.0,
                "distribution": "triangular",
                "min": 1,
                "max": 3,
                "mode": 5,
            },
            "Parameter 'triangular-mode' mode must be within min/max bounds for triangular distribution",
        ),
    ],
)
def test_run_simulation_parameter_validation_errors(
    parameter_payload: Dict[str, Any], error_message: str
) -> None:
    with pytest.raises(ValueError) as exc:
        run_simulation([parameter_payload])
    assert str(exc.value) == error_message


def test_run_simulation_normal_std_dev_fallback():
    params: List[Dict[str, Any]] = [
        {
            "name": "std-dev-fallback",
            "value": 10.0,
            "distribution": "normal",
            "std_dev": 0,
        }
    ]
    results = run_simulation(params, iterations=3, seed=99)
    assert len(results) == 3
    assert all("result" in entry for entry in results)


def test_simulation_endpoint_no_params(client: TestClient):
    scenario_payload = {
    scenario_payload: Dict[str, Any] = {
        "name": f"NoParamScenario-{uuid4()}",
        "description": "No parameters run",
    }
@@ -50,7 +117,7 @@ def test_simulation_endpoint_no_params(client: TestClient):
def test_simulation_endpoint_success(
    client: TestClient, db_session: Session
):
    scenario_payload = {
    scenario_payload: Dict[str, Any] = {
        "name": f"SimScenario-{uuid4()}",
        "description": "Simulation test",
    }
@@ -58,10 +125,10 @@ def test_simulation_endpoint_success(
    assert scenario_resp.status_code == 200
    scenario_id = scenario_resp.json()["id"]

    params = [
    params: List[Dict[str, Any]] = [
        {"name": "param1", "value": 2.5, "distribution": "normal", "std_dev": 0.5}
    ]
    payload = {
    payload: Dict[str, Any] = {
        "scenario_id": scenario_id,
        "parameters": params,
        "iterations": 10,
@@ -85,7 +152,7 @@ def test_simulation_endpoint_success(


def test_simulation_endpoint_uses_stored_parameters(client: TestClient):
    scenario_payload = {
    scenario_payload: Dict[str, Any] = {
        "name": f"StoredParams-{uuid4()}",
        "description": "Stored parameter simulation",
    }
@@ -93,7 +160,7 @@ def test_simulation_endpoint_uses_stored_parameters(client: TestClient):
    assert scenario_resp.status_code == 200
    scenario_id = scenario_resp.json()["id"]

    parameter_payload = {
    parameter_payload: Dict[str, Any] = {
        "scenario_id": scenario_id,
        "name": "grade",
        "value": 1.5,

tests/unit/test_ui_routes.py (new file, 100 lines)
@@ -0,0 +1,100 @@

from typing import Any, Dict, cast

from fastapi.testclient import TestClient

from models.scenario import Scenario


def test_dashboard_route_provides_summary(
    api_client: TestClient, seeded_ui_data: Dict[str, Any]
) -> None:
    response = api_client.get("/ui/dashboard")
    assert response.status_code == 200

    template = getattr(response, "template", None)
    assert template is not None
    assert template.name == "Dashboard.html"

    context = cast(Dict[str, Any], getattr(response, "context", {}))
    assert context.get("report_available") is True

    metric_labels = {item["label"] for item in context["summary_metrics"]}
    assert {"CAPEX Total", "OPEX Total", "Production", "Simulation Iterations"}.issubset(metric_labels)

    scenario = cast(Scenario, seeded_ui_data["scenario"])
    scenario_row = next(
        row for row in context["scenario_rows"] if row["scenario_name"] == scenario.name
    )
    assert scenario_row["iterations"] == 3
    assert scenario_row["simulation_mean_display"] == "971,666.67"
    assert scenario_row["capex_display"] == "$1,000,000.00"
    assert scenario_row["opex_display"] == "$250,000.00"
    assert scenario_row["production_display"] == "800.00"
    assert scenario_row["consumption_display"] == "1,200.00"


def test_scenarios_route_lists_seeded_scenario(
    api_client: TestClient, seeded_ui_data: Dict[str, Any]
) -> None:
    response = api_client.get("/ui/scenarios")
    assert response.status_code == 200

    template = getattr(response, "template", None)
    assert template is not None
    assert template.name == "ScenarioForm.html"

    context = cast(Dict[str, Any], getattr(response, "context", {}))
    names = [item["name"] for item in context["scenarios"]]
    scenario = cast(Scenario, seeded_ui_data["scenario"])
    assert scenario.name in names


def test_reporting_route_includes_summary(
    api_client: TestClient, seeded_ui_data: Dict[str, Any]
) -> None:
    response = api_client.get("/ui/reporting")
    assert response.status_code == 200

    template = getattr(response, "template", None)
    assert template is not None
    assert template.name == "reporting.html"

    context = cast(Dict[str, Any], getattr(response, "context", {}))
    summaries = context["report_summaries"]
    scenario = cast(Scenario, seeded_ui_data["scenario"])
    scenario_summary = next(
        item for item in summaries if item["scenario_id"] == scenario.id
    )
    assert scenario_summary["iterations"] == 3
    mean_value = float(scenario_summary["summary"]["mean"])
    assert abs(mean_value - 971_666.6666666666) < 1e-6


def test_dashboard_data_endpoint_returns_aggregates(
    api_client: TestClient, seeded_ui_data: Dict[str, Any]
) -> None:
    response = api_client.get("/ui/dashboard/data")
    assert response.status_code == 200

    payload = response.json()
    assert payload["report_available"] is True

    metric_map = {item["label"]: item["value"] for item in payload["summary_metrics"]}
    assert metric_map["CAPEX Total"].startswith("$")
    assert metric_map["Maintenance Cost"].startswith("$")

    scenario = cast(Scenario, seeded_ui_data["scenario"])
    scenario_rows = payload["scenario_rows"]
    scenario_entry = next(
        row for row in scenario_rows if row["scenario_name"] == scenario.name
    )
    assert scenario_entry["capex_display"] == "$1,000,000.00"
    assert scenario_entry["production_display"] == "800.00"

    labels = payload["scenario_cost_chart"]["labels"]
    idx = labels.index(scenario.name)
    assert payload["scenario_cost_chart"]["capex"][idx] == 1_000_000.0

    activity_labels = payload["scenario_activity_chart"]["labels"]
    activity_idx = activity_labels.index(scenario.name)
    assert payload["scenario_activity_chart"]["production"][activity_idx] == 800.0
tests/unit/test_validation.py (new file, 28 lines)
@@ -0,0 +1,28 @@

from uuid import uuid4

import pytest
from fastapi import HTTPException
from fastapi.testclient import TestClient


def test_validate_json_allows_valid_payload(api_client: TestClient) -> None:
    payload = {
        "name": f"ValidJSON-{uuid4()}",
        "description": "Middleware should allow valid JSON.",
    }
    response = api_client.post("/api/scenarios/", json=payload)
    assert response.status_code == 200
    data = response.json()
    assert data["name"] == payload["name"]


def test_validate_json_rejects_invalid_payload(api_client: TestClient) -> None:
    # The middleware raises before routing, so TestClient surfaces the
    # HTTPException directly instead of returning a response object.
    with pytest.raises(HTTPException) as exc_info:
        api_client.post(
            "/api/scenarios/",
            content=b"{not valid json",
            headers={"Content-Type": "application/json"},
        )

    assert exc_info.value.status_code == 400
    assert exc_info.value.detail == "Invalid JSON payload"