Testing Strategy

Overview

CalMiner uses a combination of unit, integration, and end-to-end tests to ensure quality.

Frameworks

  • Backend: pytest for unit and integration tests.
  • Frontend: pytest with Playwright for E2E tests.
  • Database: pytest fixtures with psycopg2 for DB tests.

Test Types

  • Unit Tests: Test individual functions/modules.
  • Integration Tests: Test API endpoints and DB interactions (see the sketch after this list).
  • E2E Tests: Playwright for full user flows.
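
As an illustration of the integration level, the sketch below exercises an API endpoint through FastAPI's TestClient. The application import path and the /scenarios endpoint are assumptions for illustration only; adjust them to the actual project layout.

```python
# Hypothetical integration test; app import path and endpoint are assumptions.
from fastapi.testclient import TestClient

from calminer.main import app  # assumed application entry point


def test_create_scenario_returns_created_record():
    """POST /scenarios should persist the payload and echo it back."""
    client = TestClient(app)
    payload = {"name": "Baseline", "description": "Reference scenario"}

    response = client.post("/scenarios", json=payload)

    assert response.status_code in (200, 201), response.text
    assert response.json()["name"] == payload["name"]
```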

CI/CD

  • Use GitHub Actions for CI.
  • Run tests on pull requests.
  • Code coverage: 80% minimum enforced in CI (using pytest-cov); see Code Coverage below for the longer-term 95% target.

Running Tests

  • Unit: pytest tests/unit/
  • E2E: pytest tests/e2e/
  • All: pytest

Test Directory Structure

Organize tests under the tests/ directory mirroring the application structure:

tests/
  unit/
    test_<module>.py
  e2e/
    test_<flow>.py
  fixtures/
    conftest.py

Writing Tests

  • Name tests with the test_ prefix.
  • Group related tests in classes or modules.
  • Use descriptive assertion messages, as in the sketch below.
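
A short sketch of these conventions; the validation logic is inlined as a stand-in rather than a real CalMiner API:

```python
# Illustrative only: grouping, naming, and assertion messages.
class TestScenarioNameValidation:
    """Group related checks for one behaviour in a single class."""

    def test_rejects_blank_name(self):
        name = "   "
        is_valid = bool(name.strip())  # stand-in for the real validation call
        assert not is_valid, "blank scenario names should be rejected"

    def test_accepts_regular_name(self):
        name = "Baseline 2025"
        is_valid = bool(name.strip())
        assert is_valid, f"expected {name!r} to be accepted"
```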

Fixtures and Test Data

  • Define reusable fixtures in tests/fixtures/conftest.py (see the sketch after this list).
  • Use temporary in-memory databases or isolated schemas for DB tests.
  • Load sample data via fixtures for consistent test environments.
  • Leverage the seeded_ui_data fixture in tests/unit/conftest.py to populate scenarios with related cost, maintenance, and simulation records for deterministic UI route checks.
  • Use tests/unit/test_ui_routes.py to verify that /ui/dashboard, /ui/scenarios, and /ui/reporting render expected context and that /ui/dashboard/data emits aggregated JSON payloads.
  • Use tests/unit/test_router_validation.py to exercise request validation branches for scenario creation, parameter distribution rules, simulation inputs, reporting summaries, and maintenance costs.
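
A minimal sketch of the isolated-schema idea described above. The DSN, schema naming, and cleanup strategy are assumptions, not the project's actual fixture:

```python
# tests/fixtures/conftest.py (sketch; DSN and schema handling are assumptions)
import uuid

import psycopg2
import pytest

TEST_DSN = "postgresql://calminer:calminer@localhost:5432/calminer_test"  # assumed


@pytest.fixture
def isolated_schema():
    """Create a throwaway schema per test so DB tests never interfere."""
    schema = f"test_{uuid.uuid4().hex[:8]}"
    conn = psycopg2.connect(TEST_DSN)
    conn.autocommit = True
    with conn.cursor() as cur:
        cur.execute(f"CREATE SCHEMA {schema}")
        cur.execute(f"SET search_path TO {schema}")
    try:
        yield conn
    finally:
        with conn.cursor() as cur:
            cur.execute(f"DROP SCHEMA {schema} CASCADE")
        conn.close()
```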

E2E (Playwright) Tests

The E2E test suite, located in tests/e2e/, uses Playwright to simulate user interactions in a live browser environment. These tests are designed to catch issues in the UI, frontend-backend integration, and overall application flow.

Fixtures

  • live_server: A session-scoped fixture that launches the FastAPI application in a separate process, making it accessible to the browser (a sketch follows this list).
  • playwright_instance, browser, page: Standard pytest-playwright fixtures for managing the Playwright instance, browser, and individual pages.
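
The repository already ships a live_server fixture; the sketch below shows one plausible shape for it. The app import string, port, and httpx readiness probe are assumptions:

```python
# tests/e2e/conftest.py (sketch; module path, host, and port are assumptions)
import multiprocessing
import time

import httpx
import pytest
import uvicorn

BASE_URL = "http://127.0.0.1:8001"  # assumed test port


def _run_app():
    # The app is referenced by import string so it binds cleanly in the child process.
    uvicorn.run("calminer.main:app", host="127.0.0.1", port=8001, log_level="warning")


@pytest.fixture(scope="session")
def live_server():
    """Start the FastAPI app in a separate process and wait until it answers."""
    proc = multiprocessing.Process(target=_run_app, daemon=True)
    proc.start()
    for _ in range(50):  # poll for up to ~5 seconds
        try:
            httpx.get(BASE_URL, timeout=0.1)
            break
        except httpx.HTTPError:
            time.sleep(0.1)
    yield BASE_URL
    proc.terminate()
    proc.join()
```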

Smoke Tests

  • UI Page Loading: test_smoke.py contains a parameterized test that systematically navigates to all UI routes to ensure they load without errors, have the correct title, and display a primary heading.
  • Form Submissions: Each major form in the application has a corresponding test file (e.g., test_scenarios.py, test_costs.py) that verifies the following (see the sketch after this list):
    • The form page loads correctly.
    • A new item can be created by filling out and submitting the form.
    • The application provides immediate visual feedback (e.g., a success message).
    • The UI is dynamically updated to reflect the new item (e.g., a new row in a table).
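
The sketch below shows the shape of such a form-submission test. The route, input ids, feedback text, and table markup are assumptions; the real tests in tests/e2e/ are the reference:

```python
# tests/e2e/test_scenarios.py (sketch; selectors, field names, and URL are assumptions)
from playwright.sync_api import Page, expect


def test_create_scenario_shows_feedback_and_new_row(live_server, page: Page):
    """Fill the scenario form, submit it, and check feedback plus the table update."""
    page.goto(f"{live_server}/ui/scenarios")

    page.fill("#scenario-name", "Playwright Scenario")  # assumed input id
    page.click("button[type='submit']")

    # Immediate visual feedback from the feedback div in ScenarioForm.html.
    expect(page.locator("#feedback")).to_contain_text("created")  # assumed message

    # The table updates dynamically with the new item.
    expect(page.locator("table tr", has_text="Playwright Scenario")).to_be_visible()
```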

Running E2E Tests

To run the Playwright tests, use the following command:

pytest tests/e2e/

To run the tests in headed mode and observe the browser interactions, use:

pytest tests/e2e/ --headed

Mocking and Dependency Injection

  • Use unittest.mock to mock external dependencies.
  • Inject dependencies via function parameters or FastAPI's dependency overrides in tests (a sketch follows).
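
A sketch of the dependency-override approach. The get_db dependency, its import path, and the endpoint are assumptions; the pattern itself (app.dependency_overrides) is standard FastAPI:

```python
# Sketch of FastAPI dependency overrides in a test (get_db and the endpoint are assumptions).
from fastapi.testclient import TestClient

from calminer.main import app             # assumed entry point
from calminer.dependencies import get_db  # assumed dependency


class FakeDB:
    def fetch_scenarios(self):
        return [{"id": 1, "name": "Stubbed scenario"}]


def test_list_scenarios_uses_overridden_db():
    app.dependency_overrides[get_db] = lambda: FakeDB()
    try:
        client = TestClient(app)
        response = client.get("/scenarios")
        assert response.status_code == 200, response.text
    finally:
        app.dependency_overrides.clear()
```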

Code Coverage

  • Install pytest-cov to generate coverage reports.
  • Run with coverage: pytest --cov --cov-report=term for quick baselines (use --cov-report=html when visualizing hotspots).
  • Target 95%+ overall coverage. Focus on historically low-coverage modules: services/simulation.py, services/reporting.py, middleware/validation.py, and routes/ui.py.
  • Recent additions include unit tests that validate Monte Carlo parameter errors, reporting fallbacks, and JSON middleware rejection paths to guard against malformed inputs (a sketch follows this list).
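
As one illustration of those rejection-path tests, a malformed JSON payload can be sent directly through the TestClient. The endpoint and accepted status codes are assumptions:

```python
# Sketch of a malformed-JSON rejection test (endpoint and status codes are assumptions).
from fastapi.testclient import TestClient

from calminer.main import app  # assumed entry point


def test_malformed_json_is_rejected():
    client = TestClient(app)
    response = client.post(
        "/scenarios",
        content='{"name": ',  # deliberately truncated JSON
        headers={"Content-Type": "application/json"},
    )
    assert response.status_code in (400, 422), response.text
```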

CI Integration

  • Configure GitHub Actions workflow in .github/workflows/ci.yml to:
    • Install dependencies, including Playwright browsers (playwright install).
    • Run pytest with coverage for unit tests.
    • Run pytest tests/e2e/ for E2E tests.
    • Fail on coverage <80%.
    • Upload coverage artifact.