# Testing Strategy

## Overview

CalMiner will use a combination of unit, integration, and end-to-end (E2E) tests to ensure quality.

## Frameworks

- **Backend**: pytest for unit and integration tests.
- **Frontend**: pytest with Playwright for E2E tests.
- **Database**: pytest fixtures with psycopg2 for DB tests.

## Test Types

- **Unit Tests**: Test individual functions/modules.
- **Integration Tests**: Test API endpoints and DB interactions.
- **E2E Tests**: Playwright for full user flows.

## CI/CD

- Use GitHub Actions for CI.
- Run tests on pull requests.
- Code coverage gate: 80% (enforced with `pytest-cov`); see the Code Coverage section for the longer-term 95%+ target.

## Running Tests

- Unit: `pytest tests/unit/`
- E2E: `pytest tests/e2e/`
- All: `pytest`

## Test Directory Structure

Organize tests under the `tests/` directory, mirroring the application structure:

```text
tests/
  unit/
    test_<module>.py
  e2e/
    test_<flow>.py
  fixtures/
    conftest.py
```

## Writing Tests

- Name tests with the `test_` prefix.
- Group related tests in classes or modules.
- Use descriptive assertion messages (a short sketch follows this list).
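
A minimal sketch of these conventions; the module name and the arithmetic under test are illustrative, not the project's own:

```python
# tests/unit/test_costs.py: illustrative example of naming and grouping conventions.
class TestCostAggregation:
    """Related cost tests grouped in a single class."""

    def test_total_cost_sums_line_items(self):
        line_items = [100.0, 250.5, 49.5]
        total = sum(line_items)  # stand-in for the real aggregation helper
        assert total == 400.0, f"expected 400.0, got {total}"

    def test_total_cost_of_empty_scenario_is_zero(self):
        assert sum([]) == 0, "an empty scenario should cost nothing"
```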

## Fixtures and Test Data

- Define reusable fixtures in `tests/fixtures/conftest.py`.
- Use temporary in-memory databases or isolated schemas for DB tests (see the fixture sketch after this list).
- Load sample data via fixtures for consistent test environments.
- Leverage the `seeded_ui_data` fixture in `tests/unit/conftest.py` to populate scenarios with related cost, maintenance, and simulation records for deterministic UI route checks.
- Use `tests/unit/test_ui_routes.py` to verify that `/ui/dashboard`, `/ui/scenarios`, and `/ui/reporting` render expected context and that `/ui/dashboard/data` emits aggregated JSON payloads.
- Use `tests/unit/test_router_validation.py` to exercise request validation branches for scenario creation, parameter distribution rules, simulation inputs, reporting summaries, and maintenance costs.
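
A minimal fixture sketch in this spirit, using an in-memory SQLite database as a self-contained stand-in for the project's psycopg2-backed setup (the table, seed row, and fixture name are illustrative):

```python
# tests/fixtures/conftest.py: sketch only; the real project wires psycopg2.
import sqlite3

import pytest


@pytest.fixture
def db_connection():
    """Yield an isolated in-memory database seeded with sample data."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE scenarios (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO scenarios (name) VALUES ('baseline')")
    conn.commit()
    yield conn  # each test receives a fresh, pre-seeded database
    conn.close()
```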

## E2E (Playwright) Tests

The E2E test suite, located in `tests/e2e/`, uses Playwright to simulate user interactions in a live browser environment. These tests are designed to catch issues in the UI, frontend-backend integration, and overall application flow.

### Fixtures

- `live_server`: A session-scoped fixture that launches the FastAPI application in a separate process, making it accessible to the browser (a sketch follows this list).
- `playwright_instance`, `browser`, `page`: Standard `pytest-playwright` fixtures for managing the Playwright instance, browser, and individual pages.
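
A rough sketch of what the `live_server` fixture can look like; the module path `app.main:app`, the port, and the fixed readiness wait are assumptions:

```python
# tests/e2e/conftest.py: sketch only; module path and port are assumptions.
import multiprocessing
import time

import pytest
import uvicorn


def _run_app():
    uvicorn.run("app.main:app", host="127.0.0.1", port=8001, log_level="warning")


@pytest.fixture(scope="session")
def live_server():
    """Run the FastAPI app in a separate process so the browser can reach it."""
    proc = multiprocessing.Process(target=_run_app, daemon=True)
    proc.start()
    time.sleep(1.0)  # crude readiness wait; a robust fixture would poll the socket
    yield "http://127.0.0.1:8001"
    proc.terminate()
    proc.join()
```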

### Smoke Tests

- **UI Page Loading**: `test_smoke.py` contains a parameterized test that systematically navigates to all UI routes to ensure they load without errors, have the correct title, and display a primary heading (see the sketch after this list).
- **Form Submissions**: Each major form in the application has a corresponding test file (e.g., `test_scenarios.py`, `test_costs.py`) that verifies:
  - The form page loads correctly.
  - A new item can be created by filling out and submitting the form.
  - The application provides immediate visual feedback (e.g., a success message).
  - The UI is dynamically updated to reflect the new item (e.g., a new row in a table).
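
A minimal sketch of the parameterized page-loading check; the route list, app title, and headings are assumptions rather than the project's confirmed values:

```python
# tests/e2e/test_smoke.py: illustrative; actual routes, titles, and headings may differ.
import pytest


@pytest.mark.parametrize(
    "path, heading",
    [
        ("/ui/dashboard", "Dashboard"),
        ("/ui/scenarios", "Scenarios"),
        ("/ui/reporting", "Reporting"),
    ],
)
def test_ui_page_loads(live_server, page, path, heading):
    page.goto(f"{live_server}{path}")
    # a page that renders without errors should keep the app title
    assert "CalMiner" in page.title(), f"unexpected title on {path}"
    # every UI page is expected to expose a primary heading
    assert heading in page.locator("h1").first.inner_text()
```

Form-submission tests follow the same pattern: fill fields with `page.fill(...)`, submit with `page.click(...)`, then assert on the success message and the newly rendered row.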

### Running E2E Tests

To run the Playwright tests, use the following command:

```bash
pytest tests/e2e/
```

To run the tests in headed mode and observe the browser interactions, use:

```bash
pytest tests/e2e/ --headed
```

## Mocking and Dependency Injection

- Use `unittest.mock` to mock external dependencies.
- Inject dependencies via function parameters or FastAPI's dependency overrides in tests, as in the sketch below.
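
A minimal sketch of FastAPI's dependency-override hook; the `get_db` dependency and `/scenarios` route are hypothetical stand-ins for the app's real wiring:

```python
# tests/unit/test_overrides.py: illustrative names throughout.
from fastapi import Depends, FastAPI
from fastapi.testclient import TestClient

app = FastAPI()


def get_db():
    raise RuntimeError("real database is not available in tests")


@app.get("/scenarios")
def list_scenarios(db=Depends(get_db)):
    return {"scenarios": db["scenarios"]}


def test_list_scenarios_with_fake_db():
    # swap the real dependency for an in-memory fake
    app.dependency_overrides[get_db] = lambda: {"scenarios": ["baseline"]}
    try:
        client = TestClient(app)
        response = client.get("/scenarios")
        assert response.status_code == 200
        assert response.json() == {"scenarios": ["baseline"]}
    finally:
        app.dependency_overrides.clear()
```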

## Code Coverage

- Install `pytest-cov` to generate coverage reports.
- Run with coverage: `pytest --cov --cov-report=term` for quick baselines (use `--cov-report=html` when visualizing hotspots).
- Target 95%+ overall coverage. Focus on historically low modules: `services/simulation.py`, `services/reporting.py`, `middleware/validation.py`, and `routes/ui.py`.
- Recent additions include unit tests that validate Monte Carlo parameter errors, reporting fallbacks, and JSON middleware rejection paths to guard against malformed inputs.
- Latest snapshot (2025-10-21): `pytest --cov=. --cov-report=term-missing` reports **91%** overall coverage after achieving full coverage in `routes/ui.py` and `services/simulation.py`.
- Archive coverage artifacts by running `pytest --cov=. --cov-report=xml:reports/coverage/coverage-2025-10-21.xml --cov-report=term-missing`; the generated XML lives under `reports/coverage/` for CI uploads or historical comparisons.

## CI Integration

- Configure GitHub Actions workflow in `.github/workflows/ci.yml` to:
  - Install dependencies, including Playwright browsers (`playwright install`).
  - Run `pytest` with coverage for unit tests.
  - Run `pytest tests/e2e/` for E2E tests.
  - Fail on coverage <80%.
  - Upload coverage artifact.