116 Commits

SHA1 Message Date
4e60168837 Merge https://git.allucanget.biz/allucanget/calminer into develop
All checks were successful
CI / lint (push) Successful in 16s
CI / lint (pull_request) Successful in 16s
CI / test (push) Successful in 1m4s
CI / test (pull_request) Successful in 1m2s
CI / build (push) Successful in 1m49s
CI / build (pull_request) Successful in 1m51s
2025-11-14 20:32:03 +01:00
dae3b59af9 feat(ci): add Kubernetes deployment toggle and update conditions for deployment steps
All checks were successful
CI / lint (push) Successful in 16s
CI / test (push) Successful in 1m3s
CI / build (push) Successful in 1m53s
CI / lint (pull_request) Successful in 16s
CI / test (pull_request) Successful in 1m3s
CI / build (pull_request) Successful in 1m51s
2025-11-14 20:14:53 +01:00
839399363e fix(ci): update registry handling and add image push step in CI workflow
All checks were successful
CI / lint (push) Successful in 16s
CI / test (push) Successful in 1m4s
CI / build (push) Successful in 1m45s
2025-11-14 20:08:26 +01:00
fa8a065138 feat(ci): enhance CI workflow with metadata outputs and add Coolify deployment workflow
All checks were successful
CI / lint (push) Successful in 16s
CI / test (push) Successful in 1m3s
CI / build (push) Successful in 1m48s
2025-11-14 19:55:06 +01:00
cd0c0ab416 fix(ci-build): update conditions for push permissions in CI workflow
Some checks failed
CI / lint (push) Failing after 1s
CI / test (push) Has been skipped
CI / build (push) Has been skipped
2025-11-14 19:21:48 +01:00
854b1ac713 Merge pull request 'feat:v2' (#12) from develop into main
All checks were successful
CI / lint (push) Successful in 16s
CI / test (push) Successful in 1m3s
CI / build (push) Successful in 2m17s
Reviewed-on: #12
2025-11-14 18:02:54 +01:00
25fd13ce69 Merge branch 'main' into develop
All checks were successful
CI / lint (push) Successful in 16s
CI / lint (pull_request) Successful in 16s
CI / test (push) Successful in 1m3s
CI / build (push) Successful in 1m56s
CI / test (pull_request) Successful in 1m3s
CI / build (pull_request) Successful in 1m51s
2025-11-14 18:02:43 +01:00
0fec805db1 Delete templates/dashboard.html
Some checks failed
CI / build (push) Has been cancelled
CI / test (push) Has been cancelled
2025-11-14 18:02:33 +01:00
3746062819 chore: remove cicache workflow file
All checks were successful
CI / lint (push) Successful in 17s
CI / test (push) Successful in 1m3s
CI / build (push) Successful in 1m54s
CI / lint (pull_request) Successful in 15s
CI / test (pull_request) Successful in 1m2s
CI / build (pull_request) Successful in 1m46s
2025-11-14 16:34:17 +01:00
958c165721 chore: add .gitattributes for text handling and line endings
All checks were successful
CI / lint (push) Successful in 16s
CI / test (push) Successful in 1m4s
CI / build (push) Successful in 1m56s
CI / deploy (push) Has been skipped
2025-11-14 14:21:16 +01:00
6e835c83eb fix(Dockerfile): implement fallback mechanisms for apt update and install
All checks were successful
CI / lint (push) Successful in 16s
CI / test (push) Successful in 1m2s
CI / build (push) Successful in 1m49s
CI / deploy (push) Has been skipped
2025-11-14 14:12:02 +01:00
75924fca84 feat(ci): add CI workflows for linting, testing, and building
Some checks failed
CI / lint (push) Successful in 15s
CI / test (push) Successful in 1m2s
CI / build (push) Failing after 29s
CI / deploy (push) Has been skipped
2025-11-14 13:45:10 +01:00
ac9ffddbde fix(ci): downgrade upload-artifact action to v3 for compatibility
Some checks failed
CI / build (push) Failing after 41s
CI / deploy (push) Has been skipped
CI / lint (push) Successful in 15s
CI / test (push) Successful in 1m12s
2025-11-14 13:31:26 +01:00
4e5a4c645d chore: remove Playwright installation steps from CI workflow
Some checks failed
CI / lint (push) Successful in 15s
CI / test (push) Failing after 1m2s
CI / build (push) Has been skipped
CI / deploy (push) Has been skipped
2025-11-14 13:26:33 +01:00
e9678b6736 chore: remove CI workflow file and update test files for improved structure and functionality
Some checks failed
CI / lint (push) Successful in 15s
CI / test (push) Failing after 16s
CI / build (push) Has been skipped
CI / deploy (push) Has been skipped
2025-11-14 13:25:02 +01:00
e5e346b26a Update templates/dashboard.html
Some checks failed
CI / build (push) Has been skipped
CI / test (push) Failing after 17s
CI / deploy (push) Has been skipped
CI / lint (push) Successful in 16s
2025-11-14 13:11:08 +01:00
b0e623d68e fix(tests): use secure token generation for access token in navigation client
Some checks failed
CI / lint (push) Successful in 15s
CI / build (push) Has been skipped
CI / test (push) Failing after 18s
CI / deploy (push) Has been skipped
2025-11-14 13:08:09 +01:00
30dbc13fae fix(init_db): correct SQL syntax for navigation link insertion
Some checks failed
CI / test (push) Has been skipped
CI / build (push) Has been skipped
CI / lint (push) Failing after 15s
CI / deploy (push) Has been skipped
2025-11-14 12:51:48 +01:00
31b9a1058a refactor: remove unused imports and streamline code in calculations and navigation services
Some checks failed
CI / test (push) Has been skipped
CI / build (push) Has been skipped
CI / lint (push) Failing after 14s
CI / deploy (push) Has been skipped
2025-11-14 12:28:48 +01:00
bcd993d57c feat(changelog): document completion of UI alignment initiative and style consolidation
Some checks failed
CI / test (push) Has been skipped
CI / build (push) Has been skipped
CI / lint (push) Failing after 15s
CI / deploy (push) Has been skipped
2025-11-13 22:34:31 +01:00
1262a4a63f Refactor CSS styles and introduce theme variables
- Removed redundant CSS rules and consolidated styles across dashboard, forms, imports, projects, and scenarios.
- Introduced new color variables in theme-default.css for better maintainability and consistency.
- Updated existing styles to utilize new color variables, enhancing the overall design.
- Improved responsiveness and layout of various components, including tables and cards.
- Ensured consistent styling for buttons, links, and headers across the application.
2025-11-13 22:30:58 +01:00
fb6816de00 Add form styles and update button classes for consistency
- Introduced a new CSS file for form styles (forms.css) to enhance form layout and design.
- Removed deprecated button styles from imports.css and updated button classes across templates to use the new utility classes.
- Updated various templates to reflect the new button styles, ensuring a consistent look and feel throughout the application.
- Refactored form-related styles in main.css and removed redundant styles from projects.css and scenarios.css.
- Ensured responsive design adjustments for form actions in smaller viewports.
2025-11-13 21:18:32 +01:00
4d0e1a9989 feat(navigation): Enhance navigation links and add legacy route redirects
Some checks failed
CI / test (push) Has been skipped
CI / build (push) Has been skipped
CI / lint (push) Failing after 14s
CI / deploy (push) Has been skipped
- Updated navigation links in `init_db.py` to include href overrides and parent slugs for profitability, opex, and capex planners.
- Modified `NavigationService` to handle child links and href overrides, ensuring proper routing when context is missing.
- Adjusted scenario detail and list templates to use new route names for opex and capex forms, with legacy fallbacks.
- Introduced integration tests for legacy calculation routes to ensure proper redirection and error handling.
- Added tests for navigation sidebar to validate role-based access and link visibility.
- Enhanced navigation sidebar tests to include calculation links and contextual URLs based on project and scenario IDs.
2025-11-13 20:23:53 +01:00
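The href-override resolution described in the commit above can be illustrated with a minimal sketch; `NavLink`, `resolve_href`, and the field names are hypothetical stand-ins for the actual `NavigationService` internals, not the project's code:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class NavLink:
    slug: str
    label: str
    # e.g. "/calculations/projects/{project_id}/scenarios/{scenario_id}/profitability"
    href_override: Optional[str] = None
    fallback_href: str = "/"


def resolve_href(link: NavLink, project_id: Optional[int], scenario_id: Optional[int]) -> str:
    """Fill the override template when context exists; otherwise use the fallback route."""
    template = link.href_override
    if template is None:
        return link.fallback_href
    if "{project_id}" in template and project_id is None:
        return link.fallback_href
    if "{scenario_id}" in template and scenario_id is None:
        return link.fallback_href
    return template.format(project_id=project_id, scenario_id=scenario_id)
```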
ed8e05147c feat: update status codes and navigation structure in calculations and reports routes 2025-11-13 17:14:17 +01:00
522b1e4105 feat: add scenarios list page with metrics and quick actions
Some checks failed
CI / test (push) Has been skipped
CI / build (push) Has been skipped
CI / lint (push) Failing after 15s
CI / deploy (push) Has been skipped
- Introduced a new template for listing scenarios associated with a project.
- Added metrics for total, active, draft, and archived scenarios.
- Implemented quick actions for creating new scenarios and reviewing project overview.
- Enhanced navigation with breadcrumbs for better user experience.

refactor: update Opex and Profitability templates for consistency

- Changed titles and button labels for clarity in Opex and Profitability templates.
- Updated form IDs and action URLs for better alignment with new naming conventions.
- Improved navigation links to include scenario and project overviews.

test: add integration tests for Opex calculations

- Created new tests for Opex calculation HTML and JSON flows.
- Validated successful calculations and ensured correct data persistence.
- Implemented tests for currency mismatch and unsupported frequency scenarios.

test: enhance project and scenario route tests

- Added tests to verify scenario list rendering and calculator shortcuts.
- Ensured scenario detail pages link back to the portfolio correctly.
- Validated project detail pages show associated scenarios accurately.
2025-11-13 16:21:36 +01:00
4f00bf0d3c feat: Add CRUD tests for project and scenario models 2025-11-13 11:06:39 +01:00
3551b0356d feat: Add comprehensive test suite for project and scenario models 2025-11-13 11:05:36 +01:00
521a8abc2d feat: Migrate to Pydantic's @field_validator and implement lifespan handler in FastAPI 2025-11-13 09:54:09 +01:00
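A minimal sketch of the two migrations this commit names, Pydantic v2's `@field_validator` and FastAPI's lifespan handler; the example model and its validation rule are illustrative, not the project's actual schema:

```python
from contextlib import asynccontextmanager

from fastapi import FastAPI
from pydantic import BaseModel, field_validator


class ScenarioIn(BaseModel):
    name: str
    currency: str

    @field_validator("currency")
    @classmethod
    def normalise_currency(cls, value: str) -> str:
        # Pydantic v2 replacement for the deprecated @validator decorator.
        code = value.strip().upper()
        if len(code) != 3:
            raise ValueError("currency must be a 3-letter ISO code")
        return code


@asynccontextmanager
async def lifespan(app: FastAPI):
    # Startup work (e.g. running the DB initializer) goes before the yield,
    # shutdown work after it; this replaces the deprecated @app.on_event hooks.
    yield


app = FastAPI(lifespan=lifespan)
```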
1feae7ff85 feat: Add Processing Opex functionality
- Introduced OpexValidationError for handling validation errors in processing opex calculations.
- Implemented ProjectProcessingOpexRepository and ScenarioProcessingOpexRepository for managing project and scenario-level processing opex snapshots.
- Enhanced UnitOfWork to include repositories for processing opex.
- Updated sidebar navigation and scenario detail templates to include links to the new Processing Opex Planner.
- Created a new template for the Processing Opex Planner with form handling for input components and parameters.
- Developed integration tests for processing opex calculations, covering HTML and JSON flows, including validation for currency mismatches and unsupported frequencies.
- Added unit tests for the calculation logic, ensuring correct handling of various scenarios and edge cases.
2025-11-13 09:26:57 +01:00
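The validation behaviour exercised by the tests above might look roughly like this sketch; the frequency set, field names, and `annualise_opex` helper are assumptions:

```python
class OpexValidationError(ValueError):
    """Raised when processing-opex inputs fail validation."""


SUPPORTED_FREQUENCIES = {"monthly", "quarterly", "yearly"}  # assumed set
PERIODS_PER_YEAR = {"monthly": 12, "quarterly": 4, "yearly": 1}


def annualise_opex(components: list[dict], base_currency: str) -> float:
    """Validate each component and return the annualised total."""
    total = 0.0
    for comp in components:
        if comp["currency"] != base_currency:
            raise OpexValidationError(
                f"currency mismatch: {comp['currency']} vs {base_currency}"
            )
        if comp["frequency"] not in SUPPORTED_FREQUENCIES:
            raise OpexValidationError(f"unsupported frequency: {comp['frequency']}")
        total += comp["amount"] * PERIODS_PER_YEAR[comp["frequency"]]
    return total
```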
1240b08740 feat: Persist initial capex calculations and enhance navigation links in UI 2025-11-12 23:52:06 +01:00
d9fd82b2e3 feat: Implement initial capex calculation feature
- Added CapexComponentInput, CapexParameters, CapexCalculationRequest, CapexCalculationResult, and related schemas for capex calculations.
- Introduced calculate_initial_capex function to aggregate capex components and compute totals and timelines.
- Created ProjectCapexRepository and ScenarioCapexRepository for managing capex snapshots in the database.
- Developed capex.html template for capturing and displaying initial capex data.
- Registered common Jinja2 filters for formatting currency and percentages.
- Implemented unit and integration tests for capex calculation functionality.
- Updated unit of work to include new repositories for capex management.
2025-11-12 23:51:52 +01:00
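A sketch of what `calculate_initial_capex` plausibly does with the schemas named above: aggregate component spend into a per-year timeline plus a grand total. The field names are assumptions:

```python
from pydantic import BaseModel


class CapexComponentInput(BaseModel):
    name: str
    amount: float
    spend_year: int  # offset within the build-out timeline (assumed field)


class CapexCalculationResult(BaseModel):
    total: float
    timeline: dict[int, float]  # aggregated spend per year


def calculate_initial_capex(components: list[CapexComponentInput]) -> CapexCalculationResult:
    timeline: dict[int, float] = {}
    for comp in components:
        timeline[comp.spend_year] = timeline.get(comp.spend_year, 0.0) + comp.amount
    return CapexCalculationResult(total=sum(timeline.values()), timeline=timeline)
```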
6c1570a254 feat: Update favicon handling to use FileResponse and add favicon.ico 2025-11-12 22:42:09 +01:00
b1a6df9f90 feat: Add profitability calculation schemas and service functions
- Introduced Pydantic schemas for profitability calculations in `schemas/calculations.py`.
- Implemented service functions for profitability calculations in `services/calculations.py`.
- Added new exception class `ProfitabilityValidationError` for handling validation errors.
- Created repositories for managing project and scenario profitability snapshots.
- Developed a utility script for verifying authenticated routes.
- Added a new HTML template for the profitability calculator interface.
- Implemented a script to fix user ID sequence in the database.
2025-11-12 22:22:29 +01:00
6d496a599e feat: Resolve test suite regressions and enhance token tamper detection
feat: Add UI router to application for improved routing
style: Update breadcrumb styles in main.css and remove redundant styles from scenarios.css
2025-11-12 20:30:40 +01:00
1199813da0 feat: Add plotly to requirements for enhanced data visualization 2025-11-12 19:42:09 +01:00
acf6f50bbd feat: Add NPV comparison and distribution charts to reporting
Some checks failed
CI / lint (push) Successful in 15s
CI / build (push) Has been skipped
CI / test (push) Failing after 17s
CI / deploy (push) Has been skipped
- Implemented NPV comparison chart generation using Plotly in ReportingService.
- Added distribution histogram for Monte Carlo results.
- Updated reporting templates to include new charts and improved layout.
- Created new settings and currencies management pages.
- Enhanced sidebar navigation with dynamic URL handling.
- Improved CSS styles for chart containers and overall layout.
- Added new simulation and theme settings pages with placeholders for future features.
2025-11-12 19:39:27 +01:00
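The NPV comparison chart likely follows the standard Plotly pattern of building a figure server-side and shipping its JSON to the template for client-side rendering; a minimal sketch, with the function signature assumed:

```python
import plotly.graph_objects as go


def generate_npv_comparison_chart(npv_by_scenario: dict[str, float]) -> str:
    """Build a bar chart and return its JSON for client-side Plotly rendering."""
    fig = go.Figure(go.Bar(x=list(npv_by_scenario), y=list(npv_by_scenario.values())))
    fig.update_layout(title="NPV by scenario", yaxis_title="NPV")
    return fig.to_json()


chart_json = generate_npv_comparison_chart({"Base": 1_200_000.0, "Upside": 2_500_000.0})
```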
ad306bd0aa feat: Refactor database initialization for SQLite compatibility 2025-11-12 18:30:35 +01:00
ed4187970c feat: Implement SQLite support with environment-driven backend switching 2025-11-12 18:29:49 +01:00
0fbe9f543e fix: Update .gitignore to include additional SQLite database files 2025-11-12 18:21:39 +01:00
80825c2c5d chore: Update changelog with recent verification and documentation updates 2025-11-12 18:17:09 +01:00
44a3bfc1bf fix: Remove unnecessary 'uvicorn' command from docker-compose.override.yml 2025-11-12 18:17:04 +01:00
1f892ebdbb feat: Implement SQLAlchemy enum helper and normalize enum values in database initialization 2025-11-12 18:11:19 +01:00
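An enum helper that normalises stored values in SQLAlchemy typically persists member values rather than member names, so seeded rows and ORM writes agree on casing; a sketch, with the enum itself illustrative:

```python
import enum

from sqlalchemy import Enum as SAEnum


class ProjectStatus(enum.Enum):
    DRAFT = "draft"
    ACTIVE = "active"
    ARCHIVED = "archived"


def enum_column(py_enum: type[enum.Enum]) -> SAEnum:
    """Persist member values (lowercase strings) instead of member names."""
    return SAEnum(
        py_enum,
        values_callable=lambda e: [member.value for member in e],
        native_enum=False,  # store as plain VARCHAR instead of a DB-native enum type
    )
```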
bcdc9e861e feat: Enhance CSS with custom properties for theming and layout adjustments 2025-11-12 18:11:02 +01:00
23523f70f1 feat: Add comprehensive tests for database initialization and seeding 2025-11-12 16:38:20 +01:00
8ef6724960 feat: Add database initialization, reset, and verification scripts 2025-11-12 16:30:17 +01:00
6e466a3fd2 Refactor database initialization and remove Alembic migrations
- Removed legacy Alembic migration files and consolidated schema management into a new Pydantic-backed initializer (`scripts/init_db.py`).
- Updated `main.py` to ensure the new DB initializer runs on startup, maintaining idempotency.
- Adjusted session management in `config/database.py` to prevent DetachedInstanceError.
- Introduced new enums in `models/enums.py` for better organization and clarity.
- Refactored various models to utilize the new enums, improving code maintainability.
- Enhanced middleware to handle JSON validation more robustly, ensuring non-JSON requests do not trigger JSON errors.
- Added tests for middleware and enums to ensure expected behavior and consistency.
- Updated changelog to reflect significant changes and improvements.
2025-11-12 16:29:44 +01:00
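An idempotent initializer of the kind described here typically combines create-if-missing DDL with seed-only-when-empty checks, so rerunning it on every startup is safe; a minimal sketch, with the table and column names assumed:

```python
from sqlalchemy import create_engine, inspect, text


def init_db(database_url: str) -> None:
    """Safe to run on every startup: creates what is missing, seeds only once."""
    engine = create_engine(database_url)
    with engine.begin() as conn:
        if "navigation_links" not in inspect(conn).get_table_names():
            conn.execute(text(
                "CREATE TABLE navigation_links (slug TEXT PRIMARY KEY, label TEXT)"
            ))
        count = conn.execute(text("SELECT COUNT(*) FROM navigation_links")).scalar()
        if count == 0:
            # Seeding is guarded, so reruns are side-effect free.
            conn.execute(text(
                "INSERT INTO navigation_links (slug, label) VALUES ('dashboard', 'Dashboard')"
            ))


init_db("sqlite://")  # in-memory demo; the app would use its configured DSN
```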
9d4c807475 feat: Update logo images in footer and header templates 2025-11-12 16:00:11 +01:00
9cd555e134 feat: Add pre-commit configuration for code quality tools 2025-11-12 12:07:39 +01:00
e72e297c61 feat: Add CI workflow for linting, testing, and building the project
Some checks failed
CI / test (push) Has been skipped
CI / build (push) Has been skipped
CI / lint (push) Failing after 14s
CI / deploy (push) Has been skipped
2025-11-12 12:00:56 +01:00
101d9309fd chore: Update changelog to reflect changes made on 2025-11-12 2025-11-12 12:00:04 +01:00
9556f9e1f1 refactor: Replace local Base declaration with import from config.database 2025-11-12 11:59:02 +01:00
4488cacdc9 chore: Update changelog with Bandit security scan remediation details
Some checks failed
CI / deploy (push) Has been skipped
CI / test (push) Has been skipped
CI / build (push) Has been skipped
CI / lint (push) Failing after 13s
2025-11-12 11:56:05 +01:00
e06a6ae068 feat: Implement random password and token generation for tests 2025-11-12 11:53:44 +01:00
3bdae3c54c fix: Update Bandit command in CI workflows to run checks on tests directory 2025-11-12 11:53:34 +01:00
d89b09fa80 fix: Remove 'tests' from Bandit exclude_dirs to ensure security checks cover all test files 2025-11-12 11:44:09 +01:00
2214bbe64f feat: Add Bandit security checks to CI workflows 2025-11-12 11:43:57 +01:00
5d6592d657 feat: Use secure random tokens for authentication and password handling in tests 2025-11-12 11:36:19 +01:00
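The pattern here is simply deriving test credentials from the `secrets` module instead of string literals, which also satisfies Bandit's hardcoded-password checks; helper names below are assumed:

```python
import secrets


def make_test_password() -> str:
    # 32 bytes of entropy, URL-safe; nothing literal for Bandit to flag.
    return secrets.token_urlsafe(32)


def make_test_token() -> str:
    return secrets.token_hex(16)
```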
3988171b46 feat: Add initial Bandit configuration for security checks 2025-11-12 11:36:13 +01:00
1520724cab fix: Add support for additional environment variable files in .gitignore 2025-11-12 11:34:29 +01:00
014d96c105 fix: Comment out pip cache steps in CI workflow
Some checks failed
CI / build (push) Has been skipped
CI / deploy (push) Has been skipped
CI / test (push) Has been skipped
CI / lint (push) Failing after 15s
2025-11-12 11:26:08 +01:00
55fa1f56c1 fix: Update branch list in CI workflow to include 'v2'
Some checks failed
CI / build (push) Has been cancelled
CI / test (push) Has been cancelled
CI / deploy (push) Has been cancelled
CI / lint (push) Has been cancelled
2025-11-12 11:23:35 +01:00
edf86a5447 Update templates/dashboard.html
Some checks failed
CI / build (push) Has been cancelled
CI / test (push) Has been cancelled
2025-11-12 11:22:33 +01:00
53eacc352e feat: Enhance deploy job to collect and upload Kubernetes deployment logs for staging and production
Some checks failed
CI / lint (push) Successful in 15s
CI / test (push) Failing after 42s
CI / build (push) Has been skipped
2025-11-12 11:15:09 +01:00
2bfa498624 fix: Remove Playwright installation steps from CI workflow
Some checks failed
CI / lint (push) Successful in 14s
CI / test (push) Failing after 43s
CI / build (push) Has been skipped
2025-11-12 11:12:55 +01:00
4cfc5d9ffa fix: Resolve Ruff E402 warnings and clean up imports across multiple modules
Some checks failed
CI / lint (push) Successful in 15s
CI / test (push) Failing after 27s
CI / build (push) Has been skipped
2025-11-12 11:10:50 +01:00
ce7f4aa776 fix: Correct syntax for apt proxy configuration in CI workflow
Some checks failed
CI / lint (push) Failing after 41s
CI / test (push) Has been skipped
CI / build (push) Has been skipped
2025-11-12 10:56:45 +01:00
e0497f58f0 fix: Correct escaping in apt proxy configuration in CI workflow
Some checks failed
CI / lint (push) Failing after 5s
CI / test (push) Has been skipped
CI / build (push) Has been skipped
2025-11-12 10:55:19 +01:00
60410fd71d fix: Comment out pip cache dependencies in CI workflow
Some checks failed
CI / lint (push) Failing after 5s
CI / test (push) Has been skipped
CI / build (push) Has been skipped
2025-11-12 10:54:23 +01:00
f55c77312d fix: Simplify pip cache directory handling in CI workflow
Some checks failed
CI / build (push) Has been cancelled
CI / test (push) Has been cancelled
CI / lint (push) Has been cancelled
2025-11-12 10:52:46 +01:00
63ec4a6953 fix: Update pip cache directory usage in CI workflow
Some checks failed
CI / lint (push) Failing after 7s
CI / test (push) Has been skipped
CI / build (push) Has been skipped
2025-11-12 10:51:58 +01:00
b0ff79ae9c fix: Update pip cache directory handling in CI workflow
Some checks failed
CI / lint (push) Failing after 8s
CI / test (push) Has been skipped
CI / build (push) Has been skipped
2025-11-12 10:51:00 +01:00
0670d05722 fix: Update pip cache directory configuration in CI workflow
Some checks failed
CI / lint (push) Failing after 9s
CI / test (push) Has been skipped
CI / build (push) Has been skipped
2025-11-12 10:48:31 +01:00
0694d4ec4b fix: Correct Python version syntax in CI workflow
Some checks failed
CI / lint (push) Failing after 35s
CI / build (push) Has been skipped
CI / test (push) Has been skipped
2025-11-12 10:45:04 +01:00
ce9c174b53 feat: Enhance project and scenario creation with monitoring metrics
Some checks failed
CI / lint (push) Failing after 1m14s
CI / test (push) Has been skipped
CI / build (push) Has been skipped
- Added monitoring metrics for project creation success and error handling in `ProjectRepository`.
- Implemented similar monitoring for scenario creation in `ScenarioRepository`.
- Refactored `run_monte_carlo` function in `simulation.py` to include timing and success/error metrics.
- Introduced new CSS styles for headers, alerts, and navigation buttons in `main.css` and `projects.css`.
- Created a new JavaScript file for navigation logic to handle chevron buttons.
- Updated HTML templates to include new navigation buttons and improved styling for buttons.
- Added tests for reporting service and routes to ensure proper functionality and access control.
- Removed unused imports and optimized existing test files for better clarity and performance.
2025-11-12 10:36:24 +01:00
f68321cd04 feat: Add CI workflow for linting, testing, and building Docker images
Some checks failed
CI / lint (push) Failing after 1m10s
CI / test (push) Has been skipped
CI / build (push) Has been skipped
2025-11-11 18:56:41 +01:00
44ff4d0e62 feat: Update Python version to 3.12 and use environment variable for Docker image name 2025-11-11 18:41:24 +01:00
4364927965 Refactor Docker setup and migration scripts
- Updated Dockerfile to set permissions for the entrypoint script and defined the entrypoint for the container.
- Consolidated Alembic migration history into a single initial migration file and removed obsolete revision files.
- Added a new script to run Alembic migrations before starting the application.
- Updated changelog to reflect changes in migration handling and Docker setup.
- Enhanced pytest configuration for coverage reporting and excluded specific files from coverage calculations.
2025-11-11 18:30:15 +01:00
795a9f99f4 feat: Enhance currency handling and validation across scenarios
- Updated form template to prefill currency input with default value and added help text for clarity.
- Modified integration tests to assert more descriptive error messages for invalid currency codes.
- Introduced new tests for currency normalization and validation in various scenarios, including imports and exports.
- Added comprehensive tests for pricing calculations, ensuring defaults are respected and overrides function correctly.
- Implemented unit tests for pricing settings repository, ensuring CRUD operations and default settings are handled properly.
- Enhanced scenario pricing evaluation tests to validate currency handling and metadata defaults.
- Added simulation tests to ensure Monte Carlo runs are accurate and handle various distribution scenarios.
2025-11-11 18:29:59 +01:00
032e6d2681 feat: implement persistent audit logging for import/export operations with Prometheus metrics 2025-11-10 21:37:07 +01:00
51c0fcec95 feat: add import dashboard UI and functionality for CSV and Excel uploads 2025-11-10 19:06:27 +01:00
3051f91ab0 feat: add export button for projects in the projects list view 2025-11-10 18:50:46 +01:00
e2465188c2 feat: enhance export and import workflows with improved error handling and notifications 2025-11-10 18:44:42 +01:00
43b1e53837 feat: implement export functionality for projects and scenarios with CSV and Excel support 2025-11-10 18:32:24 +01:00
4b33a5dba3 feat: add Excel export functionality with support for metadata and customizable sheets 2025-11-10 18:32:09 +01:00
5f183faa63 feat: implement CSV export functionality with customizable columns and formatters 2025-11-10 15:36:14 +01:00
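A CSV exporter with pluggable columns and per-column formatters can be sketched with the standard library alone; the function shape here is an assumption, not the project's actual API:

```python
import csv
import io
from typing import Any, Callable


def export_csv(
    rows: list[dict[str, Any]],
    columns: list[str],
    formatters: dict[str, Callable[[Any], str]] | None = None,
) -> str:
    formatters = formatters or {}
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=columns, extrasaction="ignore")
    writer.writeheader()
    for row in rows:
        # Apply the column's formatter when one is registered, else str().
        writer.writerow(
            {col: formatters.get(col, str)(row.get(col, "")) for col in columns}
        )
    return buf.getvalue()


print(export_csv(
    [{"name": "Mine A", "npv": 1234.5}],
    columns=["name", "npv"],
    formatters={"npv": lambda v: f"{v:,.2f}"},
))
```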
1a7581cda0 feat: add export filters for projects and scenarios with filtering capabilities 2025-11-10 15:36:06 +01:00
b1a0153a8d feat: expand import ingestion workflow with staging previews, transactional commits, and new API tests 2025-11-10 10:14:42 +01:00
609b0d779f feat: add import routes and ingestion service for project and scenario imports 2025-11-10 09:28:32 +01:00
eaef99f0ac feat: enhance import functionality with commit results and summary models for projects and scenarios 2025-11-10 09:20:41 +01:00
3bc124c11f feat: implement import functionality for projects and scenarios with CSV/XLSX support, including validation and error handling 2025-11-10 09:10:47 +01:00
7058eb4172 feat: add default administrative credentials and reset options to environment configuration 2025-11-10 09:10:08 +01:00
e0fa3861a6 feat: complete Authentication & RBAC checklist by finalizing models, migrations, repositories, guard dependencies, and integration tests 2025-11-10 07:59:42 +01:00
ab328b1a0b feat: implement environment-driven admin bootstrap settings and retire legacy RBAC documentation 2025-11-09 23:46:51 +01:00
24cb3c2f57 feat: implement admin bootstrap settings and ensure default roles and admin account 2025-11-09 23:43:13 +01:00
118657491c feat: add tests for authorization guards and role-based access control 2025-11-09 23:27:10 +01:00
0f79864188 feat: enhance project and scenario management with role-based access control
- Implemented role-based access control for project and scenario routes.
- Added authorization checks to ensure users have appropriate roles for viewing and managing projects and scenarios.
- Introduced utility functions for ensuring project and scenario access based on user roles.
- Refactored project and scenario routes to utilize new authorization helpers.
- Created initial data seeding script to set up default roles and an admin user.
- Added tests for authorization helpers and initial data seeding functionality.
- Updated exception handling to include authorization errors.
2025-11-09 23:14:54 +01:00
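Role-based guards in FastAPI are usually dependency factories of roughly this shape; `User` and `get_current_user` below are stand-ins for the application's real auth machinery:

```python
from dataclasses import dataclass, field

from fastapi import Depends, HTTPException, status


@dataclass
class User:
    """Stand-in for the application's user model."""
    username: str
    roles: set[str] = field(default_factory=set)


def get_current_user() -> User:
    # Placeholder: the real dependency would resolve the session or JWT.
    return User(username="demo", roles={"viewer"})


def require_roles(*allowed: str):
    """Dependency factory: raise 403 unless the user holds one of the allowed roles."""

    def guard(user: User = Depends(get_current_user)) -> User:
        if not set(allowed) & user.roles:
            raise HTTPException(status.HTTP_403_FORBIDDEN, detail="insufficient role")
        return user

    return guard
```

A route would then declare the guard as a dependency, e.g. `dependencies=[Depends(require_roles("admin", "viewer"))]`, keeping the authorization check out of the handler body.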
27262bdfa3 feat: Implement session management with middleware and update authentication flow 2025-11-09 23:14:41 +01:00
3601c2e422 feat: Implement user and role management with repositories
- Added RoleRepository and UserRepository for managing roles and users.
- Implemented methods for creating, retrieving, and assigning roles to users.
- Introduced functions to ensure default roles and an admin user exist in the system.
- Updated UnitOfWork to include user and role repositories.
- Created new security module for password hashing and JWT token management.
- Added tests for authentication flows, including registration, login, and password reset.
- Enhanced HTML templates for user registration, login, and password management with error handling.
- Added a logo image to the static assets.
2025-11-09 21:48:35 +01:00
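The commit mentions a security module for password hashing and JWT management; projects often reach for passlib or python-jose there, but the hashing side can be sketched with the standard library alone (the storage format and iteration count below are assumptions):

```python
import hashlib
import hmac
import os


def hash_password(password: str, *, iterations: int = 200_000) -> str:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return f"pbkdf2_sha256${iterations}${salt.hex()}${digest.hex()}"


def verify_password(password: str, stored: str) -> bool:
    _, iters, salt_hex, digest_hex = stored.split("$")
    candidate = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), bytes.fromhex(salt_hex), int(iters)
    )
    # Constant-time comparison avoids leaking match length via timing.
    return hmac.compare_digest(candidate.hex(), digest_hex)
```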
53879a411f feat: implement user and role models with password hashing, and add tests for user functionality 2025-11-09 21:45:29 +01:00
2d848c2e09 feat: add integration tests for project and scenario lifecycles, update templates to new Starlette signature, and optimize project retrieval logic 2025-11-09 19:47:35 +01:00
dad862e48e feat: reorder project route registration to prioritize static UI paths and add pytest coverage for navigation endpoints 2025-11-09 19:21:25 +01:00
400f85c907 feat: enhance project and scenario detail pages with metrics, improved layouts, and updated styles 2025-11-09 19:15:48 +01:00
7f5ed6a42d feat: enhance dashboard with new metrics, project and scenario utilities, and comprehensive tests 2025-11-09 19:02:36 +01:00
053da332ac feat: add dashboard route, template, and styles for project and scenario insights 2025-11-09 18:50:00 +01:00
02da881d3e feat: implement scenario comparison validation and API endpoint with comprehensive unit tests 2025-11-09 18:42:04 +01:00
c39dde3198 feat: enhance UI with responsive sidebar toggle and filter functionality for projects and scenarios 2025-11-09 17:48:55 +01:00
faea6777a0 feat: add CSS styles and JavaScript functionality for projects and scenarios, including filtering and layout enhancements 2025-11-09 17:36:31 +01:00
d36611606d feat: connect project and scenario routers to new Jinja2 views with forms and error handling 2025-11-09 17:32:23 +01:00
191500aeb7 feat: add project and scenario templates for detailed views and forms 2025-11-09 17:27:46 +01:00
61b42b3041 feat: implement CRUD APIs for projects and scenarios with validated schemas 2025-11-09 17:23:10 +01:00
8bf46b80c8 feat: add pytest coverage for repository and unit-of-work behaviors 2025-11-09 17:17:42 +01:00
c69f933684 feat: implement repository and unit-of-work patterns for service layer operations 2025-11-09 16:59:58 +01:00
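The repository/unit-of-work pairing described here conventionally looks like the sketch below in SQLAlchemy terms; `ProjectRepository` matches a name used elsewhere in this log, while its methods and the session wiring are assumptions:

```python
from sqlalchemy.orm import Session, sessionmaker


class ProjectRepository:
    def __init__(self, session: Session) -> None:
        self.session = session

    def add(self, project) -> None:
        self.session.add(project)


class UnitOfWork:
    """Owns one session per business transaction and exposes repositories on it."""

    def __init__(self, session_factory: sessionmaker) -> None:
        self.session_factory = session_factory

    def __enter__(self) -> "UnitOfWork":
        self.session = self.session_factory()
        self.projects = ProjectRepository(self.session)
        return self

    def __exit__(self, exc_type, exc, tb) -> None:
        try:
            if exc_type is None:
                self.session.commit()
            else:
                self.session.rollback()
        finally:
            self.session.close()
```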
c6fdc2d923 feat: add initial schema and update changelog for database models 2025-11-09 16:57:58 +01:00
dc3ebfbba5 feat: add initial Alembic configuration files for database migrations 2025-11-09 16:57:32 +01:00
32a96a27c5 feat: enhance database models with metadata and new resource types 2025-11-09 16:54:46 +01:00
203a5d08f2 feat: add initial database models and changelog for financial inputs and projects 2025-11-09 16:50:14 +01:00
204 changed files with 31718 additions and 387 deletions

.env.development Normal file

@@ -0,0 +1,25 @@
# Development Environment Configuration
ENVIRONMENT=development
DEBUG=true
LOG_LEVEL=DEBUG
# Database Configuration
DATABASE_HOST=postgres
DATABASE_PORT=5432
DATABASE_USER=calminer
DATABASE_PASSWORD=calminer_password
DATABASE_NAME=calminer_db
DATABASE_DRIVER=postgresql
# Application Settings
CALMINER_EXPORT_MAX_ROWS=1000
CALMINER_IMPORT_MAX_ROWS=10000
CALMINER_EXPORT_METADATA=true
CALMINER_IMPORT_STAGING_TTL=300
# Admin Seeding (for development)
CALMINER_SEED_ADMIN_EMAIL=admin@calminer.local
CALMINER_SEED_ADMIN_USERNAME=admin
CALMINER_SEED_ADMIN_PASSWORD=ChangeMe123!
CALMINER_SEED_ADMIN_ROLES=admin
CALMINER_SEED_FORCE=false

.env.example

@@ -10,5 +10,13 @@ DATABASE_NAME=calminer
# Optional: set a schema (comma-separated for multiple entries)
# DATABASE_SCHEMA=public
# Legacy fallback (still supported, but granular settings are preferred)
# DATABASE_URL=postgresql://<user>:<password>@localhost:5432/calminer
# Default administrative credentials are provided at deployment time through environment variables
# (`CALMINER_SEED_ADMIN_EMAIL`, `CALMINER_SEED_ADMIN_USERNAME`, `CALMINER_SEED_ADMIN_PASSWORD`, `CALMINER_SEED_ADMIN_ROLES`).
# These values are consumed by a shared bootstrap helper on application startup, ensuring mandatory roles and the administrator account exist before any user interaction.
CALMINER_SEED_ADMIN_EMAIL=<email>
CALMINER_SEED_ADMIN_USERNAME=<username>
CALMINER_SEED_ADMIN_PASSWORD=<password>
CALMINER_SEED_ADMIN_ROLES=<roles>
# Operators can request a managed credential reset by setting `CALMINER_SEED_FORCE=true`.
# On the next startup the helper rotates the admin password and reapplies role assignments, so downstream environments must update stored secrets immediately after the reset.
# CALMINER_SEED_FORCE=false

.env.production Normal file

@@ -0,0 +1,25 @@
# Production Environment Configuration
ENVIRONMENT=production
DEBUG=false
LOG_LEVEL=WARNING
# Database Configuration (MUST be set externally - no defaults)
DATABASE_HOST=
DATABASE_PORT=5432
DATABASE_USER=
DATABASE_PASSWORD=
DATABASE_NAME=
DATABASE_DRIVER=postgresql
# Application Settings
CALMINER_EXPORT_MAX_ROWS=100000
CALMINER_IMPORT_MAX_ROWS=100000
CALMINER_EXPORT_METADATA=true
CALMINER_IMPORT_STAGING_TTL=3600
# Admin Seeding (for production - set strong password)
CALMINER_SEED_ADMIN_EMAIL=admin@calminer.com
CALMINER_SEED_ADMIN_USERNAME=admin
CALMINER_SEED_ADMIN_PASSWORD=CHANGE_THIS_VERY_STRONG_PASSWORD
CALMINER_SEED_ADMIN_ROLES=admin
CALMINER_SEED_FORCE=false

.env.staging Normal file

@@ -0,0 +1,25 @@
# Staging Environment Configuration
ENVIRONMENT=staging
DEBUG=false
LOG_LEVEL=INFO
# Database Configuration (override with actual staging values)
DATABASE_HOST=postgres
DATABASE_PORT=5432
DATABASE_USER=calminer_staging
DATABASE_PASSWORD=CHANGE_THIS_STRONG_PASSWORD
DATABASE_NAME=calminer_staging_db
DATABASE_DRIVER=postgresql
# Application Settings
CALMINER_EXPORT_MAX_ROWS=50000
CALMINER_IMPORT_MAX_ROWS=50000
CALMINER_EXPORT_METADATA=true
CALMINER_IMPORT_STAGING_TTL=600
# Admin Seeding (for staging)
CALMINER_SEED_ADMIN_EMAIL=admin@staging.calminer.com
CALMINER_SEED_ADMIN_USERNAME=admin
CALMINER_SEED_ADMIN_PASSWORD=CHANGE_THIS_STRONG_PASSWORD
CALMINER_SEED_ADMIN_ROLES=admin
CALMINER_SEED_FORCE=false

.gitattributes vendored Normal file

@@ -0,0 +1,3 @@
* text=auto
Dockerfile text eol=lf

.gitea/workflows/ci-build.yml Normal file

@@ -0,0 +1,232 @@
name: CI - Build
on:
workflow_call:
workflow_dispatch:
jobs:
build:
outputs:
allow_push: ${{ steps.meta.outputs.allow_push }}
ref_name: ${{ steps.meta.outputs.ref_name }}
event_name: ${{ steps.meta.outputs.event_name }}
sha: ${{ steps.meta.outputs.sha }}
runs-on: ubuntu-latest
env:
DEFAULT_BRANCH: main
REGISTRY_URL: ${{ secrets.REGISTRY_URL }}
REGISTRY_USERNAME: ${{ secrets.REGISTRY_USERNAME }}
REGISTRY_PASSWORD: ${{ secrets.REGISTRY_PASSWORD }}
REGISTRY_CONTAINER_NAME: calminer
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Collect workflow metadata
id: meta
shell: bash
env:
DEFAULT_BRANCH: ${{ env.DEFAULT_BRANCH }}
run: |
git_ref="${GITEA_REF:-${GITHUB_REF:-}}"
ref_name="${GITEA_REF_NAME:-${GITHUB_REF_NAME:-}}"
if [ -z "$ref_name" ] && [ -n "$git_ref" ]; then
ref_name="${git_ref##*/}"
fi
event_name="${GITEA_EVENT_NAME:-${GITHUB_EVENT_NAME:-}}"
sha="${GITEA_SHA:-${GITHUB_SHA:-}}"
if [ -z "$sha" ]; then
sha="$(git rev-parse HEAD)"
fi
if [ "$ref_name" = "${DEFAULT_BRANCH:-main}" ] && [ "$event_name" != "pull_request" ]; then
echo "allow_push=true" >> "$GITHUB_OUTPUT"
else
echo "allow_push=false" >> "$GITHUB_OUTPUT"
fi
echo "ref_name=$ref_name" >> "$GITHUB_OUTPUT"
echo "event_name=$event_name" >> "$GITHUB_OUTPUT"
echo "sha=$sha" >> "$GITHUB_OUTPUT"
- name: Validate registry configuration
shell: bash
run: |
set -euo pipefail
if [ -z "${REGISTRY_URL}" ]; then
echo "::error::REGISTRY_URL secret not configured. Configure it with your Gitea container registry host." >&2
exit 1
fi
server_url="${GITEA_SERVER_URL:-${GITHUB_SERVER_URL:-}}"
server_host="${server_url#http://}"
server_host="${server_host#https://}"
server_host="${server_host%%/*}"
server_host="${server_host%%:*}"
registry_host="${REGISTRY_URL#http://}"
registry_host="${registry_host#https://}"
registry_host="${registry_host%%/*}"
registry_host="${registry_host%%:*}"
if [ -n "${server_host}" ] && ! printf '%s' "${registry_host}" | grep -qi "${server_host}"; then
echo "::warning::REGISTRY_URL (${REGISTRY_URL}) does not match current Gitea host (${server_host}). Ensure this registry endpoint is managed by Gitea." >&2
fi
registry_repository="${registry_host}/allucanget/${REGISTRY_CONTAINER_NAME}"
echo "REGISTRY_HOST=${registry_host}" >> "$GITHUB_ENV"
echo "REGISTRY_REPOSITORY=${registry_repository}" >> "$GITHUB_ENV"
- name: Set up QEMU and Buildx
uses: docker/setup-buildx-action@v3
- name: Log in to gitea registry
if: ${{ steps.meta.outputs.allow_push == 'true' }}
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY_HOST }}
username: ${{ env.REGISTRY_USERNAME }}
password: ${{ env.REGISTRY_PASSWORD }}
- name: Build image
id: build-image
env:
REGISTRY_REPOSITORY: ${{ env.REGISTRY_REPOSITORY }}
REGISTRY_CONTAINER_NAME: ${{ env.REGISTRY_CONTAINER_NAME }}
SHA_TAG: ${{ steps.meta.outputs.sha }}
PUSH_IMAGE: ${{ steps.meta.outputs.allow_push == 'true' && env.REGISTRY_HOST != '' && env.REGISTRY_USERNAME != '' && env.REGISTRY_PASSWORD != '' }}
run: |
set -eo pipefail
LOG_FILE=build.log
if [ "${PUSH_IMAGE}" = "true" ]; then
docker buildx build \
--load \
--tag "${REGISTRY_REPOSITORY}:latest" \
--tag "${REGISTRY_REPOSITORY}:${SHA_TAG}" \
--file Dockerfile \
. 2>&1 | tee "${LOG_FILE}"
else
docker buildx build \
--load \
--tag "${REGISTRY_CONTAINER_NAME}:ci" \
--file Dockerfile \
. 2>&1 | tee "${LOG_FILE}"
fi
- name: Push image
if: ${{ steps.meta.outputs.allow_push == 'true' }}
env:
REGISTRY_REPOSITORY: ${{ env.REGISTRY_REPOSITORY }}
SHA_TAG: ${{ steps.meta.outputs.sha }}
run: |
set -euo pipefail
if [ -z "${REGISTRY_REPOSITORY}" ]; then
echo "::error::REGISTRY_REPOSITORY not defined; cannot push image" >&2
exit 1
fi
docker push "${REGISTRY_REPOSITORY}:${SHA_TAG}"
docker push "${REGISTRY_REPOSITORY}:latest"
- name: Upload docker build logs
if: failure()
uses: actions/upload-artifact@v4
with:
name: docker-build-logs
path: build.log
deploy:
needs: build
if: needs.build.outputs.allow_push == 'true'
runs-on: ubuntu-latest
env:
REGISTRY_URL: ${{ secrets.REGISTRY_URL }}
REGISTRY_CONTAINER_NAME: calminer
KUBE_CONFIG: ${{ secrets.KUBE_CONFIG }}
STAGING_KUBE_CONFIG: ${{ secrets.STAGING_KUBE_CONFIG }}
PROD_KUBE_CONFIG: ${{ secrets.PROD_KUBE_CONFIG }}
K8S_DEPLOY_ENABLED: ${{ secrets.K8S_DEPLOY_ENABLED }}
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Resolve registry repository
run: |
set -euo pipefail
if [ -z "${REGISTRY_URL}" ]; then
echo "::error::REGISTRY_URL secret not configured. Configure it with your Gitea container registry host." >&2
exit 1
fi
registry_host="${REGISTRY_URL#http://}"
registry_host="${registry_host#https://}"
registry_host="${registry_host%%/*}"
registry_host="${registry_host%%:*}"
registry_repository="${registry_host}/allucanget/${REGISTRY_CONTAINER_NAME}"
echo "REGISTRY_HOST=${registry_host}" >> "$GITHUB_ENV"
echo "REGISTRY_REPOSITORY=${registry_repository}" >> "$GITHUB_ENV"
- name: Report Kubernetes deployment toggle
run: |
set -euo pipefail
enabled="${K8S_DEPLOY_ENABLED:-}"
if [ "${enabled}" = "true" ]; then
echo "Kubernetes deployment is enabled for this run."
else
echo "::notice::Kubernetes deployment steps are disabled (set secrets.K8S_DEPLOY_ENABLED to 'true' to enable)."
fi
- name: Capture commit metadata
id: commit_meta
run: |
set -euo pipefail
message="$(git log -1 --pretty=%B | tr '\n' ' ')"
echo "message=$message" >> "$GITHUB_OUTPUT"
- name: Set up kubectl for staging
if: env.K8S_DEPLOY_ENABLED == 'true' && contains(steps.commit_meta.outputs.message, '[deploy staging]')
uses: azure/k8s-set-context@v3
with:
method: kubeconfig
kubeconfig: ${{ env.STAGING_KUBE_CONFIG }}
- name: Set up kubectl for production
if: env.K8S_DEPLOY_ENABLED == 'true' && contains(steps.commit_meta.outputs.message, '[deploy production]')
uses: azure/k8s-set-context@v3
with:
method: kubeconfig
kubeconfig: ${{ env.PROD_KUBE_CONFIG }}
- name: Deploy to staging
if: env.K8S_DEPLOY_ENABLED == 'true' && contains(steps.commit_meta.outputs.message, '[deploy staging]')
run: |
kubectl set image deployment/calminer-app calminer=${REGISTRY_REPOSITORY}:latest
kubectl apply -f k8s/configmap.yaml
kubectl apply -f k8s/secret.yaml
kubectl rollout status deployment/calminer-app
- name: Collect staging deployment logs
if: env.K8S_DEPLOY_ENABLED == 'true' && contains(steps.commit_meta.outputs.message, '[deploy staging]')
run: |
mkdir -p logs/deployment/staging
kubectl get pods -o wide > logs/deployment/staging/pods.txt
kubectl get deployment calminer-app -o yaml > logs/deployment/staging/deployment.yaml
kubectl logs deployment/calminer-app --all-containers=true --tail=500 > logs/deployment/staging/calminer-app.log
- name: Deploy to production
if: env.K8S_DEPLOY_ENABLED == 'true' && contains(steps.commit_meta.outputs.message, '[deploy production]')
run: |
kubectl set image deployment/calminer-app calminer=${REGISTRY_REPOSITORY}:latest
kubectl apply -f k8s/configmap.yaml
kubectl apply -f k8s/secret.yaml
kubectl rollout status deployment/calminer-app
- name: Collect production deployment logs
if: env.K8S_DEPLOY_ENABLED == 'true' && contains(steps.commit_meta.outputs.message, '[deploy production]')
run: |
mkdir -p logs/deployment/production
kubectl get pods -o wide > logs/deployment/production/pods.txt
kubectl get deployment calminer-app -o yaml > logs/deployment/production/deployment.yaml
kubectl logs deployment/calminer-app --all-containers=true --tail=500 > logs/deployment/production/calminer-app.log
- name: Upload deployment logs
if: always()
uses: actions/upload-artifact@v4
with:
name: deployment-logs
path: logs/deployment
if-no-files-found: ignore

.gitea/workflows/ci-lint.yml Normal file

@@ -0,0 +1,44 @@
name: CI - Lint
on:
workflow_call:
workflow_dispatch:
jobs:
lint:
runs-on: ubuntu-latest
env:
APT_CACHER_NG: http://192.168.88.14:3142
steps:
- uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: "3.12"
- name: Configure apt proxy
run: |
if [ -n "${APT_CACHER_NG}" ]; then
echo "Acquire::http::Proxy \"${APT_CACHER_NG}\";" | tee /etc/apt/apt.conf.d/01apt-cacher-ng
fi
- name: Install system packages
run: |
apt-get update
apt-get install -y build-essential libpq-dev
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
pip install -r requirements-test.txt
- name: Run Ruff
run: ruff check .
- name: Run Black
run: black --check .
- name: Run Bandit
run: bandit -c pyproject.toml -r tests

.gitea/workflows/ci-test.yml Normal file

@@ -0,0 +1,73 @@
name: CI - Test
on:
workflow_call:
workflow_dispatch:
jobs:
test:
runs-on: ubuntu-latest
env:
APT_CACHER_NG: http://192.168.88.14:3142
DB_DRIVER: postgresql+psycopg2
DB_HOST: 192.168.88.35
DB_NAME: calminer_test
DB_USER: calminer
DB_PASSWORD: calminer_password
services:
postgres:
image: postgres:17
env:
POSTGRES_USER: ${{ env.DB_USER }}
POSTGRES_PASSWORD: ${{ env.DB_PASSWORD }}
POSTGRES_DB: ${{ env.DB_NAME }}
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: "3.12"
- name: Configure apt proxy
run: |
if [ -n "${APT_CACHER_NG}" ]; then
echo "Acquire::http::Proxy \"${APT_CACHER_NG}\";" | tee /etc/apt/apt.conf.d/01apt-cacher-ng
fi
- name: Install system packages
run: |
apt-get update
apt-get install -y build-essential libpq-dev
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
pip install -r requirements-test.txt
- name: Run tests
env:
DATABASE_DRIVER: ${{ env.DB_DRIVER }}
DATABASE_HOST: postgres
DATABASE_PORT: 5432
DATABASE_USER: ${{ env.DB_USER }}
DATABASE_PASSWORD: ${{ env.DB_PASSWORD }}
DATABASE_NAME: ${{ env.DB_NAME }}
run: |
pytest --cov=. --cov-report=term-missing --cov-report=xml --cov-fail-under=80 --junitxml=pytest-report.xml
- name: Upload test artifacts
if: always()
uses: actions/upload-artifact@v3
with:
name: test-artifacts
path: |
coverage.xml
pytest-report.xml

.gitea/workflows/ci.yml Normal file

@@ -0,0 +1,30 @@
name: CI
on:
push:
branches:
- main
- develop
- v2
pull_request:
branches:
- main
- develop
workflow_dispatch:
jobs:
lint:
uses: ./.gitea/workflows/ci-lint.yml
secrets: inherit
test:
needs: lint
uses: ./.gitea/workflows/ci-test.yml
secrets: inherit
build:
needs:
- lint
- test
uses: ./.gitea/workflows/ci-build.yml
secrets: inherit

.gitea/workflows/cicache.yml (deleted)

@@ -1,141 +0,0 @@
name: CI
on:
push:
branches: [main, develop]
pull_request:
branches: [main, develop]
jobs:
test:
env:
APT_CACHER_NG: http://192.168.88.14:3142
DB_DRIVER: postgresql+psycopg2
DB_HOST: 192.168.88.35
DB_NAME: calminer_test
DB_USER: calminer
DB_PASSWORD: calminer_password
runs-on: ubuntu-latest
services:
postgres:
image: postgres:17
env:
POSTGRES_USER: ${{ env.DB_USER }}
POSTGRES_PASSWORD: ${{ env.DB_PASSWORD }}
POSTGRES_DB: ${{ env.DB_NAME }}
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
steps:
- uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: '3.11'
- name: Get pip cache dir
id: pip-cache
run: |
echo "path=$(pip cache dir)" >> $GITEA_OUTPUT
echo "Pip cache dir: $(pip cache dir)"
- name: Cache pip dependencies
uses: actions/cache@v4
with:
path: ${{ steps.pip-cache.outputs.path }}
key: ${{ runner.os }}-pip-${{ hashFiles('requirements.txt', 'requirements-test.txt') }}
restore-keys: |
${{ runner.os }}-pip-
- name: Update apt-cacher-ng config
run: |-
echo 'Acquire::http::Proxy "{{ env.APT_CACHER_NG }}";' | tee /etc/apt/apt.conf.d/01apt-cacher-ng
apt-get update
- name: Update system packages
run: apt-get upgrade -y
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
pip install -r requirements-test.txt
- name: Install Playwright system dependencies
run: playwright install-deps
- name: Install Playwright browsers
run: playwright install
- name: Run tests
env:
DATABASE_DRIVER: ${{ env.DB_DRIVER }}
DATABASE_HOST: postgres
DATABASE_PORT: 5432
DATABASE_USER: ${{ env.DB_USER }}
DATABASE_PASSWORD: ${{ env.DB_PASSWORD }}
DATABASE_NAME: ${{ env.DB_NAME }}
run: |
pytest tests/ --cov=.
- name: Build Docker image
run: |
docker build -t calminer .
build:
runs-on: ubuntu-latest
needs: test
env:
DEFAULT_BRANCH: main
REGISTRY_URL: ${{ secrets.REGISTRY_URL }}
REGISTRY_USERNAME: ${{ secrets.REGISTRY_USERNAME }}
REGISTRY_PASSWORD: ${{ secrets.REGISTRY_PASSWORD }}
REGISTRY_CONTAINER_NAME: calminer
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Collect workflow metadata
id: meta
shell: bash
run: |
ref_name="${GITHUB_REF_NAME:-${GITHUB_REF##*/}}"
event_name="${GITHUB_EVENT_NAME:-}"
sha="${GITHUB_SHA:-}"
if [ "$ref_name" = "${DEFAULT_BRANCH:-main}" ]; then
echo "on_default=true" >> "$GITHUB_OUTPUT"
else
echo "on_default=false" >> "$GITHUB_OUTPUT"
fi
echo "ref_name=$ref_name" >> "$GITHUB_OUTPUT"
echo "event_name=$event_name" >> "$GITHUB_OUTPUT"
echo "sha=$sha" >> "$GITHUB_OUTPUT"
- name: Set up QEMU and Buildx
uses: docker/setup-buildx-action@v3
- name: Log in to gitea registry
if: ${{ steps.meta.outputs.on_default == 'true' }}
uses: docker/login-action@v3
continue-on-error: true
with:
registry: ${{ env.REGISTRY_URL }}
username: ${{ env.REGISTRY_USERNAME }}
password: ${{ env.REGISTRY_PASSWORD }}
- name: Build and push image
uses: docker/build-push-action@v5
with:
context: .
file: Dockerfile
push: ${{ steps.meta.outputs.on_default == 'true' && steps.meta.outputs.event_name != 'pull_request' && (env.REGISTRY_URL != '' && env.REGISTRY_USERNAME != '' && env.REGISTRY_PASSWORD != '') }}
tags: |
${{ env.REGISTRY_URL }}/allucanget/${{ env.REGISTRY_CONTAINER_NAME }}:latest
${{ env.REGISTRY_URL }}/allucanget/${{ env.REGISTRY_CONTAINER_NAME }}:${{ steps.meta.outputs.sha }}


@@ -0,0 +1,105 @@
name: Deploy - Coolify
on:
push:
branches:
- main
workflow_dispatch:
jobs:
deploy:
runs-on: ubuntu-latest
env:
COOLIFY_BASE_URL: ${{ secrets.COOLIFY_BASE_URL }}
COOLIFY_API_TOKEN: ${{ secrets.COOLIFY_API_TOKEN }}
COOLIFY_APPLICATION_ID: ${{ secrets.COOLIFY_APPLICATION_ID }}
COOLIFY_DEPLOY_ENV: ${{ secrets.COOLIFY_DEPLOY_ENV }}
DOCKER_COMPOSE_PATH: docker-compose.prod.yml
ENV_FILE_PATH: deploy/.env
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Capture deployment context
id: context
run: |
set -euo pipefail
repo="${GITEA_REPOSITORY:-${GITHUB_REPOSITORY:-}}"
if [ -z "$repo" ]; then
repo="$(git remote get-url origin | sed 's#.*/\(.*\)\.git#\1#')"
fi
ref_name="${GITEA_REF_NAME:-${GITHUB_REF_NAME:-}}"
full_ref="${GITEA_REF:-${GITHUB_REF:-}}"
if [ -z "$ref_name" ] && [ -n "$full_ref" ]; then
ref_name="${full_ref##*/}"
fi
if [ -z "$ref_name" ]; then
ref_name="$(git rev-parse --abbrev-ref HEAD)"
fi
sha="${GITEA_SHA:-${GITHUB_SHA:-}}"
if [ -z "$sha" ]; then
sha="$(git rev-parse HEAD)"
fi
echo "repository=$repo" >> "$GITHUB_OUTPUT"
echo "ref=${ref_name:-main}" >> "$GITHUB_OUTPUT"
echo "sha=$sha" >> "$GITHUB_OUTPUT"
- name: Prepare compose bundle
run: |
set -euo pipefail
mkdir -p deploy
cp "$DOCKER_COMPOSE_PATH" deploy/docker-compose.yml
if [ -n "$COOLIFY_DEPLOY_ENV" ]; then
printf '%s\n' "$COOLIFY_DEPLOY_ENV" > "$ENV_FILE_PATH"
elif [ ! -f "$ENV_FILE_PATH" ]; then
echo "::error::COOLIFY_DEPLOY_ENV secret not configured and deploy/.env missing" >&2
exit 1
fi
- name: Validate Coolify secrets
run: |
set -euo pipefail
missing=0
for var in COOLIFY_BASE_URL COOLIFY_API_TOKEN COOLIFY_APPLICATION_ID; do
if [ -z "${!var}" ]; then
echo "::error::Missing required secret: $var"
missing=1
fi
done
if [ "$missing" -eq 1 ]; then
exit 1
fi
- name: Trigger deployment via Coolify API
env:
HEAD_SHA: ${{ steps.context.outputs.sha }}
run: |
set -euo pipefail
api_url="$COOLIFY_BASE_URL/api/v1/applications/${COOLIFY_APPLICATION_ID}/deploy"
payload=$(jq -n --arg sha "$HEAD_SHA" '{ commitSha: $sha }')
response=$(curl -sS -w '\n%{http_code}' \
-X POST "$api_url" \
-H "Authorization: Bearer $COOLIFY_API_TOKEN" \
-H "Content-Type: application/json" \
-d "$payload")
body=$(echo "$response" | head -n -1)
status=$(echo "$response" | tail -n1)
echo "Deploy response status: $status"
echo "$body"
printf '%s' "$body" > deploy/coolify-response.json
if [ "$status" -ge 400 ]; then
echo "::error::Deployment request failed"
exit 1
fi
- name: Upload deployment bundle
if: always()
uses: actions/upload-artifact@v3
with:
name: coolify-deploy-bundle
path: |
deploy/docker-compose.yml
deploy/.env
deploy/coolify-response.json
if-no-files-found: warn

.gitignore vendored

@@ -17,6 +17,7 @@ env/
# environment variables
.env
*.env
.env.*
# except example files
!config/*.env.example
@@ -46,8 +47,10 @@ htmlcov/
logs/
# SQLite database
data/
*.sqlite3
test*.db
local*.db
# Act runner files
.runner

.pre-commit-config.yaml Normal file

@@ -0,0 +1,13 @@
repos:
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.6.1
hooks:
- id: ruff
- repo: https://github.com/psf/black-pre-commit-mirror
rev: 24.8.0
hooks:
- id: black
- repo: https://github.com/PyCQA/bandit
rev: 1.7.9
hooks:
- id: bandit

Dockerfile

@@ -41,8 +41,25 @@ if url:
finally:
sock.close()
PY
APT_PROXY_CONFIG=/etc/apt/apt.conf.d/01proxy
apt_update_with_fallback() {
if ! apt-get update; then
rm -f "$APT_PROXY_CONFIG"
apt-get update
apt-get install -y --no-install-recommends build-essential gcc libpq-dev
fi
}
apt_install_with_fallback() {
if ! apt-get install -y --no-install-recommends "$@"; then
rm -f "$APT_PROXY_CONFIG"
apt-get update
apt-get install -y --no-install-recommends "$@"
fi
}
apt_update_with_fallback
apt_install_with_fallback build-essential gcc libpq-dev
pip install --upgrade pip
pip wheel --no-deps --wheel-dir /wheels -r requirements.txt
apt-get purge -y --auto-remove build-essential gcc
@@ -88,8 +105,25 @@ if url:
finally:
sock.close()
PY
APT_PROXY_CONFIG=/etc/apt/apt.conf.d/01proxy
apt_update_with_fallback() {
if ! apt-get update; then
rm -f "$APT_PROXY_CONFIG"
apt-get update
apt-get install -y --no-install-recommends libpq5
fi
}
apt_install_with_fallback() {
if ! apt-get install -y --no-install-recommends "$@"; then
rm -f "$APT_PROXY_CONFIG"
apt-get update
apt-get install -y --no-install-recommends "$@"
fi
}
apt_update_with_fallback
apt_install_with_fallback libpq5
rm -rf /var/lib/apt/lists/*
EOF
@@ -108,4 +142,6 @@ USER appuser
EXPOSE 8003
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8003", "--workers", "4"]
ENTRYPOINT ["uvicorn"]
CMD ["main:app", "--host", "0.0.0.0", "--port", "8003", "--workers", "4"]

README.md

@@ -8,4 +8,6 @@ The system is designed to help mining companies make informed decisions by simul
## Documentation & quickstart
This repository contains only code. See detailed developer and architecture documentation in the [Docs](https://git.allucanget.biz/allucanget/calminer-docs) repository.
- Detailed developer, architecture, and operations guides live in the companion [calminer-docs](../calminer-docs/) repository. Please see the [README](../calminer-docs/README.md) there for instructions.
- For a local run, create a `.env` (see `.env.example`), install requirements, then execute `python -m scripts.init_db` followed by `uvicorn main:app --reload`. The initializer is safe to rerun and seeds demo data automatically.
- To wipe and recreate the schema in development, run `CALMINER_ENV=development python -m scripts.reset_db` before invoking the initializer again.

changelog.md Normal file

@@ -0,0 +1,112 @@
# Changelog
## 2025-11-13
- Completed the UI alignment initiative by consolidating shared form and button styles into `static/css/forms.css` and `static/css/main.css`, introducing the semantic palette in `static/css/theme-default.css`, and spot-checking key pages plus contrast reports.
- Refactored the architecture data model docs by turning `calminer-docs/architecture/08_concepts/02_data_model.md` into a concise overview that links to new detail pages covering SQLAlchemy models, navigation metadata, enumerations, Pydantic schemas, and monitoring tables.
- Nested the calculator navigation under Projects by updating `scripts/init_db.py` seeds, teaching `services/navigation.py` to resolve scenario-scoped hrefs for profitability/opex/capex, and extending sidebar coverage through `tests/integration/test_navigation_sidebar_calculations.py` plus `tests/services/test_navigation_service.py` to validate admin/viewer visibility and contextual URL generation.
- Added navigation sidebar integration coverage by extending `tests/conftest.py` with role-switching headers, seeding admin/viewer test users, and adding `tests/integration/test_navigation_sidebar.py` to assert ordered link rendering for admins, viewer filtering of admin-only entries, and anonymous rejection of the endpoint.
- Finalised the financial data import/export templates by inventorying required fields, defining CSV column specs with validation rules, drafting Excel workbook layouts, documenting end-user workflows in `calminer-docs/userguide/data_import_export.md`, and recording stakeholder review steps alongside updated TODO/DONE tracking.
- Scoped profitability calculator UI under the scenario hierarchy by adding `/calculations/projects/{project_id}/scenarios/{scenario_id}/profitability` GET/POST handlers, updating scenario templates and sidebar navigation to link to the new route, and extending `tests/test_project_scenario_routes.py` with coverage for the scenario path plus legacy redirect behaviour (module run: 14 passed).
- Extended scenario frontend regression coverage by updating `tests/test_project_scenario_routes.py` to assert project/scenario breadcrumbs and calculator navigation, normalising escaped URLs, and re-running the module tests (13 passing).
- Cleared FastAPI and Pydantic deprecation warnings by migrating `scripts/init_db.py` to `@field_validator`, replacing the `main.py` startup hook with a lifespan handler, auditing template response call signatures, confirming HTTP 422 constant usage, and re-running the full pytest suite to ensure a clean warning slate.
- Delivered the capex planner end-to-end: added scaffolded UI in `templates/scenarios/capex.html`, wired GET/POST handlers through `routes/calculations.py`, implemented calculation logic plus snapshot persistence in `services/calculations.py` and `models/capex_snapshot.py`, updated navigation links, and introduced unit tests in `tests/services/test_calculations_capex.py`.
- Updated UI navigation to surface the opex planner by adding the sidebar link in `templates/partials/sidebar_nav.html` and wiring a scenario detail action in `templates/scenarios/detail.html`.
- Completed manual validation of the Capex Planner UI flows (sidebar entry, scenario deep link, validation errors, successful calculation) with results captured in `manual_tests/capex.md`, documented snapshot verification steps, and noted the optional JSON client check for future follow-up.
- Added opex calculation unit tests in `tests/services/test_calculations_opex.py` covering success metrics, currency validation, frequency enforcement, and evaluation horizon extension.
- Documented the Opex Planner workflow in `calminer-docs/userguide/opex_planner.md`, linked it from the user guide index, extended `calminer-docs/architecture/08_concepts/02_data_model.md` with snapshot coverage, and captured the completion in `.github/instructions/DONE.md`.
- Implemented opex integration coverage in `tests/integration/test_opex_calculations.py`, exercising HTML and JSON flows, verifying snapshot persistence, and asserting currency mismatch handling for form and API submissions.
- Executed the full pytest suite with coverage (211 tests) to confirm no regressions or warnings after the opex documentation updates.
- Completed the navigation sidebar API migration by finalising the database-backed service, refactoring `templates/partials/sidebar_nav.html` to consume the endpoint, hydrating via `static/js/navigation_sidebar.js`, and updating HTML route dependencies (`routes/projects.py`, `routes/scenarios.py`, `routes/reports.py`, `routes/imports.py`, `routes/calculations.py`) to use redirect-aware guards so anonymous visitors receive login redirects instead of JSON errors (manual verification via curl across projects, scenarios, reports, and calculations pages).
## 2025-11-12
- Fixed critical 500 error in reporting dashboard by correcting route reference in reporting.html template - changed 'reports.project_list_page' to 'projects.project_list_page' to resolve NoMatchFound error when accessing /ui/reporting.
- Completed navigation validation: inventoried all sidebar navigation links, identified missing routes for simulations, reporting, settings, themes, and currencies, created new UI routes in routes/ui.py with proper authentication guards, built corresponding templates (simulations.html, reporting.html, settings.html, theme_settings.html, currencies.html), registered the UI router in main.py, updated sidebar navigation to use route names instead of hardcoded URLs, and enhanced navigation.js to use dynamic URL resolution for proper route handling.
- Fixed a critical template rendering error in `sidebar_nav.html` where URL objects from `request.url_for()` were passed to string methods, causing a `TypeError`; added `|string` filters to convert the URL objects to strings for proper template rendering.
- Integrated Plotly charting for interactive visualizations in reporting templates: added chart generation methods to `ReportingService` (`generate_npv_comparison_chart`, `generate_distribution_histogram`), updated project summary and scenario distribution contexts to include chart JSON data, enhanced templates with chart containers and JavaScript rendering, added chart-container CSS styling, and verified that all reporting tests pass (an illustrative chart-generation sketch follows this list).
- Completed local run verification: started application with `uvicorn main:app --reload` without errors, verified authenticated routes (/login, /, /projects/ui, /projects) load correctly with seeded data, and summarized findings for deployment pipeline readiness.
- Fixed docker-compose.override.yml command array to remove duplicate "uvicorn" entry, enabling successful container startup with uvicorn reload in development mode.
- Completed deployment pipeline verification: built Docker image without errors, validated docker-compose configuration, deployed locally with docker-compose (app and postgres containers started successfully), and confirmed application startup logs showing database bootstrap and seeded data initialization.
- Completed documentation of current data models: updated `calminer-docs/architecture/08_concepts/02_data_model.md` with comprehensive SQLAlchemy model schemas, enumerations, Pydantic API schemas, and analysis of discrepancies between models and schemas.
- Switched `models/performance_metric.py` to reuse the shared declarative base from `config.database`, clearing the SQLAlchemy 2.0 `declarative_base` deprecation warning and verifying repository tests still pass.
- Replaced the Alembic migration workflow with the idempotent Pydantic-backed initializer (`scripts/init_db.py`), added a guarded reset utility (`scripts/reset_db.py`), removed migration artifacts/tooling (Alembic directory, config, Docker entrypoint), refreshed the container entrypoint to invoke `uvicorn` directly, and updated installation/architecture docs plus the README to direct developers to the new seeding/reset flow.
- Eliminated Bandit hardcoded-secret findings by replacing literal JWT tokens and passwords across auth/security tests with randomized helpers drawn from `tests/utils/security.py`, ensuring fixtures still assert expected behaviours.
- Centralized Bandit configuration in `pyproject.toml`, reran `bandit -c pyproject.toml -r calminer tests`, and verified the scan now reports zero issues.
- Diagnosed admin bootstrap failure caused by legacy `roles` schema, added Alembic migration `20251112_00_add_roles_metadata_columns.py` to backfill `display_name`, `description`, `created_at`, and `updated_at`, and verified the migration via full pytest run in the activated `.venv`.
- Resolved Ruff E402 warnings by moving module docstrings ahead of `from __future__ import annotations` across currency and pricing service modules, dropped the unused `HTTPException` import in `monitoring/__init__.py`, and confirmed a clean `ruff check .` run.
- Enhanced the deploy job in `.gitea/workflows/cicache.yml` to capture Kubernetes pod, deployment, and container logs into `/logs/deployment/` for staging/production rollouts and publish them via a `deployment-logs` artifact, updating CI/CD documentation with retrieval instructions.
- Fixed CI dashboard template lookup failures by renaming `templates/Dashboard.html` to `templates/dashboard.html` and verifying `tests/test_dashboard_route.py` locally to ensure TemplateNotFound no longer occurs on case-sensitive filesystems.
- Implemented SQLite support as primary local database with environment-driven backend switching (`CALMINER_USE_SQLITE=true`), updated `scripts/init_db.py` for database-agnostic DDL generation (PostgreSQL enums vs SQLite CHECK constraints), tested compatibility with both backends, and verified application startup and seeded data initialization work seamlessly across SQLite and PostgreSQL.
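As referenced above, an illustrative sketch of the chart-generation approach: `generate_npv_comparison_chart` does exist on `ReportingService`, but this body is an assumption about its shape, not the actual implementation:

```python
import plotly.graph_objects as go


def generate_npv_comparison_chart(labels: list[str], npvs: list[float]) -> str:
    """Build a bar chart of scenario NPVs and return it as JSON for templates."""
    fig = go.Figure(data=[go.Bar(x=labels, y=npvs, name="NPV")])
    fig.update_layout(title="Scenario NPV comparison", yaxis_title="NPV")
    # The template embeds this JSON in a chart container and renders it
    # client-side with Plotly.js.
    return fig.to_json()
```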
## 2025-11-11
- Collapsed legacy Alembic revisions into `alembic/versions/00_initial.py`, removed superseded migration files, and verified the consolidated schema via SQLite upgrade and Postgres version stamping.
- Implemented base URL routing to redirect unauthenticated users to login and authenticated users to dashboard.
- Added comprehensive end-to-end tests for login flow, including redirects, session handling, and error messaging for invalid/inactive accounts.
- Updated header and footer templates to consistently use `logo_big.png` image instead of text logo, with appropriate CSS styling for sizing.
- Centralised ISO-4217 currency validation across scenarios, imports, and export filters (`models/scenario.py`, `routes/scenarios.py`, `schemas/scenario.py`, `schemas/imports.py`, `services/export_query.py`) so malformed codes are rejected consistently at every entry point.
- Updated scenario services and UI flows to surface friendly validation errors and added regression coverage for imports, exports, API creation, and lifecycle flows ensuring currencies are normalised end-to-end.
- Linked projects to their pricing settings by updating SQLAlchemy models, repositories, seeding utilities, and migrations, and added regression tests to cover the new association and default backfill.
- Bootstrapped database-stored pricing settings at application startup, aligned initial data seeding with the database-first metadata flow, and added tests covering pricing bootstrap creation, project assignment, and idempotency.
- Extended pricing configuration support to prefer persisted metadata via `dependencies.get_pricing_metadata`, added retrieval tests for project/default fallbacks, and refreshed docs (`calminer-docs/specifications/price_calculation.md`, `pricing_settings_data_model.md`) to describe the database-backed workflow and bootstrap behaviour.
- Added `services/financial.py` NPV, IRR, and payback helpers with robust cash-flow normalisation, convergence safeguards, and fractional period support, plus comprehensive pytest coverage exercising representative project scenarios and failure modes (a simplified sketch follows this list).
- Authored `calminer-docs/specifications/financial_metrics.md` capturing DCF assumptions, solver behaviours, and worked examples, and cross-linked the architecture concepts to the new reference for consistent navigation.
- Implemented `services/simulation.py` Monte Carlo engine with configurable distributions, summary aggregation, and reproducible RNG seeding, introduced regression tests in `tests/test_simulation.py`, and documented configuration/usage in `calminer-docs/specifications/monte_carlo_simulation.md` with architecture cross-links.
- Polished reporting HTML contexts by cleaning stray fragments in `routes/reports.py`, adding download action metadata for project and scenario pages, and generating scenario comparison download URLs with correctly serialised repeated `scenario_ids` parameters.
- Consolidated Alembic history into a single initial migration (`20251111_00_initial_schema.py`), removed superseded revision files, and ensured Alembic metadata still references the project metadata for clean bootstrap.
- Added `scripts/run_migrations.py` and a Docker entrypoint wrapper to run Alembic migrations before `uvicorn` starts, removed the fallback `Base.metadata.create_all` call, and updated `calminer-docs/admin/installation.md` so developers know how to apply migrations locally or via Docker.
- Configured pytest defaults to collect coverage (`--cov`) with an 80% fail-under gate, excluded entrypoint/reporting scaffolds from the calculation, updated contributor docs with the standard `pytest` command, and verified the suite now reports 83% coverage.
- Standardized color scheme and typography by moving alert styles to `main.css`, adding typography rules with CSS variables, updating auth templates for consistent button classes, and ensuring all templates use centralized color and spacing variables.
- Improved navigation flow by adding two large chevron buttons at the top of the navigation sidebar that let users move to the previous and next page in the page navigation list, including JavaScript logic for determining the current page and handling navigation.
- Established pytest-based unit and integration test suites with coverage thresholds, achieving 83% coverage across 181 tests, with configuration in pyproject.toml and documentation in CONTRIBUTING.md.
- Configured CI pipelines to run tests, linting, and security checks on each change, adding Bandit security scanning to the workflow and verifying execution on pushes and PRs to main/develop branches.
- Added deployment automation with Docker Compose for local development and Kubernetes manifests for production, ensuring environment parity and documenting processes in calminer-docs/admin/installation.md.
- Completed monitoring instrumentation by adding business metrics observation to project and scenario repository operations, and simulation performance tracking to Monte Carlo service with success/error status and duration metrics.
- Updated TODO list to reflect completed monitoring implementation tasks and validated changes with passing simulation tests.
- Implemented comprehensive performance monitoring for scalability (FR-006) with Prometheus metrics collection for HTTP requests, import/export operations, and general application metrics.
- Added database model for persistent metric storage with aggregation endpoints for KPIs like request latency, error rates, and throughput.
- Created FastAPI middleware for automatic request metric collection and background persistence to database.
- Extended monitoring router with performance metrics API endpoints and detailed health checks.
- Added Alembic migration for performance_metrics table and updated model imports.
- Completed concurrent interaction testing implementation, validating database transaction isolation under threading and establishing async testing framework for future concurrency enhancements.
- Implemented comprehensive deployment automation with Docker Compose configurations for development, staging, and production environments ensuring environment parity.
- Set up Kubernetes manifests with resource limits, health checks, and secrets management for production deployment.
- Configured CI/CD workflows for automated Docker image building, registry pushing, and Kubernetes deployment to staging/production environments.
- Documented deployment processes, environment configurations, and CI/CD workflows in project documentation.
- Validated the deployment automation through Docker Compose configuration testing and a structural review of the CI/CD pipelines.
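As referenced above, a simplified sketch of the DCF helpers in the spirit of `services/financial.py`; the real module adds the cash-flow normalisation, convergence safeguards, and fractional-period handling this sketch omits:

```python
from collections.abc import Sequence


def npv(rate: float, cash_flows: Sequence[float]) -> float:
    """Discount cash flows at `rate`; period 0 is the initial outlay."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))


def irr(cash_flows: Sequence[float], lo: float = -0.99, hi: float = 10.0,
        tol: float = 1e-7, max_iter: int = 200) -> float:
    """Find the rate where NPV crosses zero by bisection."""
    f_lo, f_hi = npv(lo, cash_flows), npv(hi, cash_flows)
    if f_lo * f_hi > 0:
        raise ValueError("cash flows do not bracket a sign change")
    for _ in range(max_iter):
        mid = (lo + hi) / 2.0
        f_mid = npv(mid, cash_flows)
        if abs(f_mid) < tol:
            return mid
        if f_lo * f_mid < 0:
            hi = mid
        else:
            lo, f_lo = mid, f_mid
    return (lo + hi) / 2.0


def payback_period(cash_flows: Sequence[float]) -> float | None:
    """Return the fractionally interpolated period where cumulative flow turns positive."""
    cumulative = 0.0
    for t, cf in enumerate(cash_flows):
        previous = cumulative
        cumulative += cf
        if cumulative >= 0 and cf > 0 and previous < 0:
            # Interpolate within the period that crosses zero.
            return t - 1 + (-previous / cf)
    return None
```

Bisection is slower than Newton iteration but cannot diverge, which is one way to honour the convergence-safeguard requirement.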
## 2025-11-10
- Added dedicated pytest coverage for guard dependencies, exercising success plus failure paths (missing session, inactive user, missing roles, project/scenario access errors) via `tests/test_dependencies_guards.py`.
- Added integration tests in `tests/test_authorization_integration.py` verifying anonymous 401 responses, role-based 403s, and authorized project manager flows across API and UI endpoints.
- Implemented environment-driven admin bootstrap settings, wired the `bootstrap_admin` helper into FastAPI startup, added pytest coverage for creation/idempotency/reset logic, and documented operational guidance in the RBAC plan and security concept.
- Retired the legacy authentication RBAC implementation plan document after migrating its guidance into live documentation and synchronized the contributor instructions to reflect the removal.
- Completed the Authentication & RBAC checklist by shipping the new models, migrations, repositories, guard dependencies, and integration tests.
- Documented the project/scenario import/export field mapping and file format guidelines in `calminer-docs/requirements/FR-008.md`, and introduced `schemas/imports.py` with Pydantic models that normalise incoming CSV/Excel rows for projects and scenarios (an illustrative row-normalisation sketch follows this list).
- Added `services/importers.py` to load CSV/XLSX files into the new import schemas, pulled in `openpyxl` for Excel support, and covered the parsing behaviour with `tests/test_import_parsing.py`.
- Expanded the import ingestion workflow with staging previews, transactional persistence commits, FastAPI preview/commit endpoints under `/imports`, and new API tests (`tests/test_import_ingestion.py`, `tests/test_import_api.py`) ensuring end-to-end coverage.
- Added persistent audit logging via `ImportExportLog`, structured log emission, Prometheus metrics instrumentation, `/metrics` endpoint exposure, and updated operator/deployment documentation to guide monitoring setup.
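As referenced above, an illustrative sketch of the row-normalisation idea behind `schemas/imports.py`; the field names are placeholders rather than the actual import schema:

```python
from pydantic import BaseModel, field_validator


class ScenarioImportRow(BaseModel):
    """Placeholder shape for a normalised scenario row; real fields differ."""

    name: str
    currency: str
    discount_rate: float

    @field_validator("name", "currency", mode="before")
    @classmethod
    def strip_cells(cls, value: object) -> object:
        # CSV/Excel cells often arrive padded with whitespace; normalise
        # before type coercion runs.
        return value.strip() if isinstance(value, str) else value

    @field_validator("currency")
    @classmethod
    def upper_currency(cls, value: str) -> str:
        return value.upper()
```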
## 2025-11-09
- Captured current implementation status, requirements coverage, missing features, and prioritized roadmap in `calminer-docs/implementation_status.md` to guide future development.
- Added core SQLAlchemy domain models, shared metadata descriptors, and Alembic migration setup (with initial schema snapshot) to establish the persistence layer foundation.
- Introduced repository and unit-of-work helpers for projects, scenarios, financial inputs, and simulation parameters to support service-layer operations.
- Added SQLite-backed pytest coverage for repository and unit-of-work behaviours to validate persistence interactions.
- Exposed project and scenario CRUD APIs with validated schemas and integrated them into the FastAPI application.
- Connected project and scenario routers to new Jinja2 list/detail/edit views with HTML forms and redirects.
- Implemented FR-009 client-side enhancements with responsive navigation toggle, mobile-first scenario tables, and shared asset loading across templates.
- Added scenario comparison validator, FastAPI comparison endpoint, and comprehensive unit tests to enforce FR-009 validation rules through API errors.
- Delivered a new dashboard experience with `templates/dashboard.html`, dedicated styling, and a FastAPI route supplying real project/scenario metrics via repository helpers.
- Extended repositories with count/recency utilities and added pytest coverage, including a dashboard rendering smoke test validating empty-state messaging.
- Brought project and scenario detail pages plus their forms in line with the dashboard visuals, adding metric cards, layout grids, and refreshed CTA styles.
- Reordered project route registration to prioritize static UI paths, eliminating 422 errors on `/projects/ui` and `/projects/create`, and added pytest smoke coverage for the navigation endpoints.
- Added end-to-end integration tests for project and scenario lifecycles, validating HTML redirects, template rendering, and API interactions, and updated `ProjectRepository.get` to deduplicate joined loads for detail views.
- Updated all Jinja2 template responses to the new Starlette signature to eliminate deprecation warnings while keeping request-aware context available to the templates.
- Introduced `services/security.py` to centralize Argon2 password hashing utilities and JWT creation/verification with typed payloads, and added pytest coverage for hashing, expiry, tampering, and token type mismatch scenarios (a condensed sketch follows this list).
- Added `routes/auth.py` with registration, login, and password reset flows, refreshed auth templates with error messaging, wired navigation links, and introduced end-to-end pytest coverage for the new forms and token flows.
- Implemented cookie-based authentication session middleware with automatic access token refresh, logout handling, navigation adjustments, and documentation/test updates capturing the new behaviour.
- Delivered idempotent seeding utilities with `scripts/initial_data.py`, entry-point runner `scripts/00_initial_data.py`, documentation updates, and pytest coverage to verify role/admin provisioning.
- Secured project and scenario routers with RBAC guard dependencies, enforced repository access checks via helper utilities, and aligned template routes with FastAPI dependency injection patterns.
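As referenced above, a condensed sketch of the hashing and token helpers in the spirit of `services/security.py`, assuming passlib with the argon2 backend (`argon2_cffi` installed) and PyJWT; the actual module exposes typed payloads and distinct access/refresh helpers:

```python
from datetime import datetime, timedelta, timezone

import jwt  # PyJWT; assumed here, the project may use another JWT library
from passlib.context import CryptContext

pwd_context = CryptContext(schemes=["argon2"], deprecated="auto")


def hash_password(plain: str) -> str:
    return pwd_context.hash(plain)


def verify_password(plain: str, hashed: str) -> bool:
    return pwd_context.verify(plain, hashed)


def create_token(subject: str, secret: str, ttl: timedelta, token_type: str) -> str:
    now = datetime.now(timezone.utc)
    payload = {"sub": subject, "type": token_type, "iat": now, "exp": now + ttl}
    return jwt.encode(payload, secret, algorithm="HS256")


def decode_token(token: str, secret: str, expected_type: str) -> dict:
    # jwt.decode raises on expiry or tampering; the type check catches an
    # access token presented where a refresh token is expected, and vice versa.
    payload = jwt.decode(token, secret, algorithms=["HS256"])
    if payload.get("type") != expected_type:
        raise ValueError("token type mismatch")
    return payload
```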

config/__init__.py (new file, 1 line)

@@ -0,0 +1 @@
"""Configuration package."""

config/database.py (modified)

@@ -11,12 +11,21 @@ def _build_database_url() -> str:
"""Construct the SQLAlchemy database URL from granular environment vars.
Falls back to `DATABASE_URL` for backward compatibility.
Supports SQLite when CALMINER_USE_SQLITE is set.
"""
legacy_url = os.environ.get("DATABASE_URL", "")
if legacy_url and legacy_url.strip() != "":
return legacy_url
use_sqlite = os.environ.get("CALMINER_USE_SQLITE", "").lower() in ("true", "1", "yes")
if use_sqlite:
# Use SQLite database
db_path = os.environ.get("DATABASE_PATH", "./data/calminer.db")
        # Ensure the directory exists; guard against a bare filename,
        # where os.path.dirname() returns an empty string.
        db_dir = os.path.dirname(db_path)
        if db_dir:
            os.makedirs(db_dir, exist_ok=True)
return f"sqlite:///{db_path}"
driver = os.environ.get("DATABASE_DRIVER", "postgresql")
host = os.environ.get("DATABASE_HOST")
port = os.environ.get("DATABASE_PORT", "5432")
@@ -54,7 +63,15 @@ def _build_database_url() -> str:
DATABASE_URL = _build_database_url()
engine = create_engine(DATABASE_URL, echo=True, future=True)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
# Avoid expiring ORM objects on commit so that objects returned from UnitOfWork
# remain usable for the duration of the request cycle without causing
# DetachedInstanceError when accessed after the session commits.
SessionLocal = sessionmaker(
autocommit=False,
autoflush=False,
bind=engine,
expire_on_commit=False,
)
Base = declarative_base()
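A minimal illustration of the `expire_on_commit=False` behaviour described in the comment above, reusing the `Base`/`SessionLocal`/`engine` names from this module; `Widget` is a throwaway model for demonstration only:

```python
from sqlalchemy import Integer, String
from sqlalchemy.orm import Mapped, mapped_column

from config.database import Base, SessionLocal, engine


class Widget(Base):
    """Throwaway model for demonstration; not part of the CalMiner schema."""

    __tablename__ = "widgets"

    id: Mapped[int] = mapped_column(Integer, primary_key=True)
    name: Mapped[str] = mapped_column(String(50))


Base.metadata.create_all(bind=engine)

with SessionLocal() as session:
    widget = Widget(name="demo")
    session.add(widget)
    session.commit()

# With the default expire_on_commit=True, this attribute access would try to
# refresh against the now-closed session and raise DetachedInstanceError;
# with expire_on_commit=False the committed values remain loaded.
print(widget.name)
```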

config/settings.py (new file, 233 lines)

@@ -0,0 +1,233 @@
from __future__ import annotations
import os
from dataclasses import dataclass
from datetime import timedelta
from functools import lru_cache
from typing import Optional
from services.pricing import PricingMetadata
from services.security import JWTSettings
@dataclass(frozen=True, slots=True)
class AdminBootstrapSettings:
"""Default administrator bootstrap configuration."""
email: str
username: str
password: str
roles: tuple[str, ...]
force_reset: bool
@dataclass(frozen=True, slots=True)
class SessionSettings:
"""Cookie and header configuration for session token transport."""
access_cookie_name: str
refresh_cookie_name: str
cookie_secure: bool
cookie_domain: Optional[str]
cookie_path: str
header_name: str
header_prefix: str
allow_header_fallback: bool
@dataclass(frozen=True, slots=True)
class Settings:
"""Application configuration sourced from environment variables."""
jwt_secret_key: str = "change-me"
jwt_algorithm: str = "HS256"
jwt_access_token_minutes: int = 15
jwt_refresh_token_days: int = 7
session_access_cookie_name: str = "calminer_access_token"
session_refresh_cookie_name: str = "calminer_refresh_token"
session_cookie_secure: bool = False
session_cookie_domain: Optional[str] = None
session_cookie_path: str = "/"
session_header_name: str = "Authorization"
session_header_prefix: str = "Bearer"
session_allow_header_fallback: bool = True
admin_email: str = "admin@calminer.local"
admin_username: str = "admin"
admin_password: str = "ChangeMe123!"
admin_roles: tuple[str, ...] = ("admin",)
admin_force_reset: bool = False
pricing_default_payable_pct: float = 100.0
pricing_default_currency: str | None = "USD"
pricing_moisture_threshold_pct: float = 8.0
pricing_moisture_penalty_per_pct: float = 0.0
@classmethod
def from_environment(cls) -> "Settings":
"""Construct settings from environment variables."""
return cls(
jwt_secret_key=os.getenv("CALMINER_JWT_SECRET", "change-me"),
jwt_algorithm=os.getenv("CALMINER_JWT_ALGORITHM", "HS256"),
jwt_access_token_minutes=cls._int_from_env(
"CALMINER_JWT_ACCESS_MINUTES", 15
),
jwt_refresh_token_days=cls._int_from_env(
"CALMINER_JWT_REFRESH_DAYS", 7
),
session_access_cookie_name=os.getenv(
"CALMINER_SESSION_ACCESS_COOKIE", "calminer_access_token"
),
session_refresh_cookie_name=os.getenv(
"CALMINER_SESSION_REFRESH_COOKIE", "calminer_refresh_token"
),
session_cookie_secure=cls._bool_from_env(
"CALMINER_SESSION_COOKIE_SECURE", False
),
session_cookie_domain=os.getenv("CALMINER_SESSION_COOKIE_DOMAIN"),
session_cookie_path=os.getenv("CALMINER_SESSION_COOKIE_PATH", "/"),
session_header_name=os.getenv(
"CALMINER_SESSION_HEADER_NAME", "Authorization"
),
session_header_prefix=os.getenv(
"CALMINER_SESSION_HEADER_PREFIX", "Bearer"
),
session_allow_header_fallback=cls._bool_from_env(
"CALMINER_SESSION_ALLOW_HEADER_FALLBACK", True
),
admin_email=os.getenv(
"CALMINER_SEED_ADMIN_EMAIL", "admin@calminer.local"
),
admin_username=os.getenv(
"CALMINER_SEED_ADMIN_USERNAME", "admin"
),
admin_password=os.getenv(
"CALMINER_SEED_ADMIN_PASSWORD", "ChangeMe123!"
),
admin_roles=cls._parse_admin_roles(
os.getenv("CALMINER_SEED_ADMIN_ROLES")
),
admin_force_reset=cls._bool_from_env(
"CALMINER_SEED_FORCE", False
),
pricing_default_payable_pct=cls._float_from_env(
"CALMINER_PRICING_DEFAULT_PAYABLE_PCT", 100.0
),
pricing_default_currency=cls._optional_str(
"CALMINER_PRICING_DEFAULT_CURRENCY", "USD"
),
pricing_moisture_threshold_pct=cls._float_from_env(
"CALMINER_PRICING_MOISTURE_THRESHOLD_PCT", 8.0
),
pricing_moisture_penalty_per_pct=cls._float_from_env(
"CALMINER_PRICING_MOISTURE_PENALTY_PER_PCT", 0.0
),
)
@staticmethod
def _int_from_env(name: str, default: int) -> int:
raw_value = os.getenv(name)
if raw_value is None:
return default
try:
return int(raw_value)
except ValueError:
return default
@staticmethod
def _bool_from_env(name: str, default: bool) -> bool:
raw_value = os.getenv(name)
if raw_value is None:
return default
lowered = raw_value.strip().lower()
if lowered in {"1", "true", "yes", "on"}:
return True
if lowered in {"0", "false", "no", "off"}:
return False
return default
@staticmethod
def _parse_admin_roles(raw_value: str | None) -> tuple[str, ...]:
if not raw_value:
return ("admin",)
parts = [segment.strip()
for segment in raw_value.split(",") if segment.strip()]
if "admin" not in parts:
parts.insert(0, "admin")
seen: set[str] = set()
ordered: list[str] = []
for role_name in parts:
if role_name not in seen:
ordered.append(role_name)
seen.add(role_name)
return tuple(ordered)
@staticmethod
def _float_from_env(name: str, default: float) -> float:
raw_value = os.getenv(name)
if raw_value is None:
return default
try:
return float(raw_value)
except ValueError:
return default
@staticmethod
def _optional_str(name: str, default: str | None = None) -> str | None:
raw_value = os.getenv(name)
if raw_value is None or raw_value.strip() == "":
return default
return raw_value.strip()
def jwt_settings(self) -> JWTSettings:
"""Build runtime JWT settings compatible with token helpers."""
return JWTSettings(
secret_key=self.jwt_secret_key,
algorithm=self.jwt_algorithm,
access_token_ttl=timedelta(minutes=self.jwt_access_token_minutes),
refresh_token_ttl=timedelta(days=self.jwt_refresh_token_days),
)
def session_settings(self) -> SessionSettings:
"""Provide transport configuration for session tokens."""
return SessionSettings(
access_cookie_name=self.session_access_cookie_name,
refresh_cookie_name=self.session_refresh_cookie_name,
cookie_secure=self.session_cookie_secure,
cookie_domain=self.session_cookie_domain,
cookie_path=self.session_cookie_path,
header_name=self.session_header_name,
header_prefix=self.session_header_prefix,
allow_header_fallback=self.session_allow_header_fallback,
)
def admin_bootstrap_settings(self) -> AdminBootstrapSettings:
"""Return configured admin bootstrap settings."""
return AdminBootstrapSettings(
email=self.admin_email,
username=self.admin_username,
password=self.admin_password,
roles=self.admin_roles,
force_reset=self.admin_force_reset,
)
def pricing_metadata(self) -> PricingMetadata:
"""Build pricing metadata defaults."""
return PricingMetadata(
default_payable_pct=self.pricing_default_payable_pct,
default_currency=self.pricing_default_currency,
moisture_threshold_pct=self.pricing_moisture_threshold_pct,
moisture_penalty_per_pct=self.pricing_moisture_penalty_per_pct,
)
@lru_cache(maxsize=1)
def get_settings() -> Settings:
"""Return cached application settings."""
return Settings.from_environment()
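One practical consequence of the `lru_cache` wrapper: environment changes made after the first call are invisible until the cache is cleared, so tests that mutate `os.environ` typically reset it first. A minimal sketch:

```python
import os

from config.settings import get_settings

os.environ["CALMINER_JWT_ACCESS_MINUTES"] = "30"
get_settings.cache_clear()  # discard any Settings instance memoised earlier
settings = get_settings()
assert settings.jwt_access_token_minutes == 30
```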

dependencies.py (new file, 400 lines)

@@ -0,0 +1,400 @@
from __future__ import annotations
from collections.abc import Callable, Iterable, Generator
from fastapi import Depends, HTTPException, Request, status
from config.settings import Settings, get_settings
from models import Project, Role, Scenario, User
from services.authorization import (
ensure_project_access as ensure_project_access_helper,
ensure_scenario_access as ensure_scenario_access_helper,
ensure_scenario_in_project as ensure_scenario_in_project_helper,
)
from services.exceptions import AuthorizationError, EntityNotFoundError
from services.security import JWTSettings
from services.session import (
AuthSession,
SessionStrategy,
SessionTokens,
build_session_strategy,
extract_session_tokens,
)
from services.unit_of_work import UnitOfWork
from services.importers import ImportIngestionService
from services.pricing import PricingMetadata
from services.navigation import NavigationService
from services.scenario_evaluation import ScenarioPricingConfig, ScenarioPricingEvaluator
from services.repositories import pricing_settings_to_metadata
def get_unit_of_work() -> Generator[UnitOfWork, None, None]:
"""FastAPI dependency yielding a unit-of-work instance."""
with UnitOfWork() as uow:
yield uow
_IMPORT_INGESTION_SERVICE = ImportIngestionService(lambda: UnitOfWork())
def get_import_ingestion_service() -> ImportIngestionService:
"""Provide singleton import ingestion service."""
return _IMPORT_INGESTION_SERVICE
def get_application_settings() -> Settings:
"""Provide cached application settings instance."""
return get_settings()
def get_pricing_metadata(
settings: Settings = Depends(get_application_settings),
uow: UnitOfWork = Depends(get_unit_of_work),
) -> PricingMetadata:
"""Return pricing metadata defaults sourced from persisted pricing settings."""
stored = uow.get_pricing_metadata()
if stored is not None:
return stored
fallback = settings.pricing_metadata()
seed_result = uow.ensure_default_pricing_settings(metadata=fallback)
return pricing_settings_to_metadata(seed_result.settings)
def get_navigation_service(
uow: UnitOfWork = Depends(get_unit_of_work),
) -> NavigationService:
if not uow.navigation:
raise RuntimeError("Navigation repository is not initialised")
return NavigationService(uow.navigation)
def get_pricing_evaluator(
metadata: PricingMetadata = Depends(get_pricing_metadata),
) -> ScenarioPricingEvaluator:
"""Provide a configured scenario pricing evaluator."""
return ScenarioPricingEvaluator(ScenarioPricingConfig(metadata=metadata))
def get_jwt_settings() -> JWTSettings:
"""Provide JWT runtime configuration derived from settings."""
return get_settings().jwt_settings()
def get_session_strategy(
settings: Settings = Depends(get_application_settings),
) -> SessionStrategy:
"""Yield configured session transport strategy."""
return build_session_strategy(settings.session_settings())
def get_session_tokens(
request: Request,
strategy: SessionStrategy = Depends(get_session_strategy),
) -> SessionTokens:
"""Extract raw session tokens from the incoming request."""
existing = getattr(request.state, "auth_session", None)
if isinstance(existing, AuthSession):
return existing.tokens
tokens = extract_session_tokens(request, strategy)
request.state.auth_session = AuthSession(tokens=tokens)
return tokens
def get_auth_session(
request: Request,
tokens: SessionTokens = Depends(get_session_tokens),
) -> AuthSession:
"""Provide authentication session context for the current request."""
existing = getattr(request.state, "auth_session", None)
if isinstance(existing, AuthSession):
return existing
if tokens.is_empty:
session = AuthSession.anonymous()
else:
session = AuthSession(tokens=tokens)
request.state.auth_session = session
return session
def get_current_user(
session: AuthSession = Depends(get_auth_session),
) -> User | None:
"""Return the current authenticated user if present."""
return session.user
def require_current_user(
session: AuthSession = Depends(get_auth_session),
) -> User:
"""Ensure that a request is authenticated and return the user context."""
if session.user is None or session.tokens.is_empty:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Authentication required.",
)
return session.user
def require_authenticated_user(
user: User = Depends(require_current_user),
) -> User:
"""Ensure the current user account is active."""
if not user.is_active:
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN,
detail="User account is disabled.",
)
return user
def require_authenticated_user_html(
request: Request,
session: AuthSession = Depends(get_auth_session),
) -> User:
"""HTML-aware authenticated dependency that redirects anonymous sessions."""
user = session.user
if user is None or session.tokens.is_empty:
login_url = str(request.url_for("auth.login_form"))
raise HTTPException(
status_code=status.HTTP_303_SEE_OTHER,
headers={"Location": login_url},
)
if not user.is_active:
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN,
detail="User account is disabled.",
)
return user
def _user_role_names(user: User) -> set[str]:
roles: Iterable[Role] = getattr(user, "roles", []) or []
return {role.name for role in roles}
def require_roles(*roles: str) -> Callable[[User], User]:
"""Dependency factory enforcing membership in one of the given roles."""
required = tuple(role.strip() for role in roles if role.strip())
if not required:
raise ValueError("require_roles requires at least one role name")
def _dependency(user: User = Depends(require_authenticated_user)) -> User:
if user.is_superuser:
return user
role_names = _user_role_names(user)
if not any(role in role_names for role in required):
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN,
detail="Insufficient permissions for this action.",
)
return user
return _dependency
def require_any_role(*roles: str) -> Callable[[User], User]:
"""Alias of require_roles for readability in some contexts."""
return require_roles(*roles)
def require_roles_html(*roles: str) -> Callable[[Request], User]:
"""Ensure user is authenticated for HTML responses; redirect anonymous to login."""
required = tuple(role.strip() for role in roles if role.strip())
if not required:
raise ValueError("require_roles_html requires at least one role name")
def _dependency(
request: Request,
session: AuthSession = Depends(get_auth_session),
) -> User:
user = session.user
if user is None:
login_url = str(request.url_for("auth.login_form"))
raise HTTPException(
status_code=status.HTTP_303_SEE_OTHER,
headers={"Location": login_url},
)
if user.is_superuser:
return user
role_names = _user_role_names(user)
if not any(role in role_names for role in required):
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN,
detail="Insufficient permissions for this action.",
)
return user
return _dependency
def require_any_role_html(*roles: str) -> Callable[[Request], User]:
"""Alias of require_roles_html for readability."""
return require_roles_html(*roles)
def require_project_resource(
*,
require_manage: bool = False,
user_dependency: Callable[..., User] = require_authenticated_user,
) -> Callable[[int], Project]:
"""Dependency factory that resolves a project with authorization checks."""
def _dependency(
project_id: int,
user: User = Depends(user_dependency),
uow: UnitOfWork = Depends(get_unit_of_work),
) -> Project:
try:
return ensure_project_access_helper(
uow,
project_id=project_id,
user=user,
require_manage=require_manage,
)
except EntityNotFoundError as exc:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail=str(exc),
) from exc
except AuthorizationError as exc:
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN,
detail=str(exc),
) from exc
return _dependency
def require_scenario_resource(
*,
require_manage: bool = False,
with_children: bool = False,
user_dependency: Callable[..., User] = require_authenticated_user,
) -> Callable[[int], Scenario]:
"""Dependency factory that resolves a scenario with authorization checks."""
def _dependency(
scenario_id: int,
user: User = Depends(user_dependency),
uow: UnitOfWork = Depends(get_unit_of_work),
) -> Scenario:
try:
return ensure_scenario_access_helper(
uow,
scenario_id=scenario_id,
user=user,
require_manage=require_manage,
with_children=with_children,
)
except EntityNotFoundError as exc:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail=str(exc),
) from exc
except AuthorizationError as exc:
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN,
detail=str(exc),
) from exc
return _dependency
def require_project_scenario_resource(
*,
require_manage: bool = False,
with_children: bool = False,
user_dependency: Callable[..., User] = require_authenticated_user,
) -> Callable[[int, int], Scenario]:
"""Dependency factory ensuring a scenario belongs to the given project and is accessible."""
def _dependency(
project_id: int,
scenario_id: int,
user: User = Depends(user_dependency),
uow: UnitOfWork = Depends(get_unit_of_work),
) -> Scenario:
try:
return ensure_scenario_in_project_helper(
uow,
project_id=project_id,
scenario_id=scenario_id,
user=user,
require_manage=require_manage,
with_children=with_children,
)
except EntityNotFoundError as exc:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail=str(exc),
) from exc
except AuthorizationError as exc:
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN,
detail=str(exc),
) from exc
return _dependency
def require_project_resource_html(
*, require_manage: bool = False
) -> Callable[[int], Project]:
"""HTML-aware project loader that redirects anonymous sessions."""
return require_project_resource(
require_manage=require_manage,
user_dependency=require_authenticated_user_html,
)
def require_scenario_resource_html(
*,
require_manage: bool = False,
with_children: bool = False,
) -> Callable[[int], Scenario]:
"""HTML-aware scenario loader that redirects anonymous sessions."""
return require_scenario_resource(
require_manage=require_manage,
with_children=with_children,
user_dependency=require_authenticated_user_html,
)
def require_project_scenario_resource_html(
*,
require_manage: bool = False,
with_children: bool = False,
) -> Callable[[int, int], Scenario]:
"""HTML-aware project-scenario loader redirecting anonymous sessions."""
return require_project_scenario_resource(
require_manage=require_manage,
with_children=with_children,
user_dependency=require_authenticated_user_html,
)
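A sketch of how these guard factories are intended to be wired into a router; the endpoint path and role names below are illustrative:

```python
from fastapi import APIRouter, Depends

from dependencies import require_project_resource_html, require_roles
from models import Project, User

router = APIRouter()


@router.get("/projects/{project_id}/settings")
def project_settings(
    project: Project = Depends(require_project_resource_html(require_manage=True)),
    user: User = Depends(require_roles("admin", "project_manager")),
) -> dict[str, int]:
    # The guards have already run: `project` resolved with a manage-level
    # access check (404/403/redirect handled inside the dependency), and
    # `user` is guaranteed active and holding one of the required roles.
    return {"project_id": project.id}
```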

docker-compose.override.yml (new file, 59 lines)

@@ -0,0 +1,59 @@
version: "3.8"
services:
app:
build:
context: .
dockerfile: Dockerfile
args:
APT_CACHE_URL: ${APT_CACHE_URL:-}
environment:
- ENVIRONMENT=development
- DEBUG=true
- LOG_LEVEL=DEBUG
# Override database to use local postgres service
- DATABASE_HOST=postgres
- DATABASE_PORT=5432
- DATABASE_USER=calminer
- DATABASE_PASSWORD=calminer_password
- DATABASE_NAME=calminer_db
- DATABASE_DRIVER=postgresql
# Development-specific settings
- CALMINER_EXPORT_MAX_ROWS=1000
- CALMINER_IMPORT_MAX_ROWS=10000
volumes:
# Mount source code for live reloading (if using --reload)
- .:/app:ro
# Override logs volume to local for easier access
- ./logs:/app/logs
ports:
- "8003:8003"
# Override command for development with reload
command:
[
"main:app",
"--host",
"0.0.0.0",
"--port",
"8003",
"--reload",
"--workers",
"1",
]
depends_on:
- postgres
restart: unless-stopped
postgres:
environment:
- POSTGRES_USER=calminer
- POSTGRES_PASSWORD=calminer_password
- POSTGRES_DB=calminer_db
ports:
- "5432:5432"
volumes:
- postgres_data:/var/lib/postgresql/data
restart: unless-stopped
volumes:
postgres_data:

docker-compose.prod.yml (new file, 73 lines)

@@ -0,0 +1,73 @@
version: "3.8"
services:
app:
image: git.allucanget.biz/allucanget/calminer:latest
environment:
- ENVIRONMENT=production
- DEBUG=false
- LOG_LEVEL=WARNING
# Database configuration - must be provided externally
- DATABASE_HOST=${DATABASE_HOST}
- DATABASE_PORT=${DATABASE_PORT:-5432}
- DATABASE_USER=${DATABASE_USER}
- DATABASE_PASSWORD=${DATABASE_PASSWORD}
- DATABASE_NAME=${DATABASE_NAME}
- DATABASE_DRIVER=postgresql
# Production-specific settings
- CALMINER_EXPORT_MAX_ROWS=100000
- CALMINER_IMPORT_MAX_ROWS=100000
- CALMINER_EXPORT_METADATA=true
- CALMINER_IMPORT_STAGING_TTL=3600
ports:
- "8003:8003"
depends_on:
postgres:
condition: service_healthy
restart: unless-stopped
# Production health checks
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8003/health"]
interval: 60s
timeout: 30s
retries: 5
start_period: 60s
# Resource limits for production
deploy:
resources:
limits:
cpus: "1.0"
memory: 1G
reservations:
cpus: "0.5"
memory: 512M
postgres:
environment:
- POSTGRES_USER=${DATABASE_USER}
- POSTGRES_PASSWORD=${DATABASE_PASSWORD}
- POSTGRES_DB=${DATABASE_NAME}
ports:
- "5432:5432"
volumes:
- postgres_data:/var/lib/postgresql/data
restart: unless-stopped
# Production postgres health check
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${DATABASE_USER} -d ${DATABASE_NAME}"]
interval: 60s
timeout: 30s
retries: 5
start_period: 60s
# Resource limits for postgres
deploy:
resources:
limits:
cpus: "1.0"
memory: 2G
reservations:
cpus: "0.5"
memory: 1G
volumes:
postgres_data:

docker-compose.staging.yml (new file, 62 lines)

@@ -0,0 +1,62 @@
version: "3.8"
services:
app:
build:
context: .
dockerfile: Dockerfile
args:
APT_CACHE_URL: ${APT_CACHE_URL:-}
environment:
- ENVIRONMENT=staging
- DEBUG=false
- LOG_LEVEL=INFO
# Database configuration - can be overridden by external env
- DATABASE_HOST=${DATABASE_HOST:-postgres}
- DATABASE_PORT=${DATABASE_PORT:-5432}
- DATABASE_USER=${DATABASE_USER:-calminer}
- DATABASE_PASSWORD=${DATABASE_PASSWORD}
- DATABASE_NAME=${DATABASE_NAME:-calminer_db}
- DATABASE_DRIVER=postgresql
# Staging-specific settings
- CALMINER_EXPORT_MAX_ROWS=50000
- CALMINER_IMPORT_MAX_ROWS=50000
- CALMINER_EXPORT_METADATA=true
- CALMINER_IMPORT_STAGING_TTL=600
ports:
- "8003:8003"
depends_on:
- postgres
restart: unless-stopped
# Health check for staging
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8003/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
postgres:
environment:
- POSTGRES_USER=${DATABASE_USER:-calminer}
- POSTGRES_PASSWORD=${DATABASE_PASSWORD}
- POSTGRES_DB=${DATABASE_NAME:-calminer_db}
ports:
- "5432:5432"
volumes:
- postgres_data:/var/lib/postgresql/data
restart: unless-stopped
# Health check for postgres
healthcheck:
test:
[
"CMD-SHELL",
"pg_isready -U ${DATABASE_USER:-calminer} -d ${DATABASE_NAME:-calminer_db}",
]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
volumes:
postgres_data:

docker-compose.yml (modified)

@@ -8,11 +8,13 @@ services:
ports:
- "8003:8003"
environment:
- DATABASE_HOST=postgres
- DATABASE_PORT=5432
- DATABASE_USER=calminer
- DATABASE_PASSWORD=calminer_password
- DATABASE_NAME=calminer_db
# Environment-specific variables should be set in override files
- ENVIRONMENT=${ENVIRONMENT:-production}
- DATABASE_HOST=${DATABASE_HOST:-postgres}
- DATABASE_PORT=${DATABASE_PORT:-5432}
- DATABASE_USER=${DATABASE_USER}
- DATABASE_PASSWORD=${DATABASE_PASSWORD}
- DATABASE_NAME=${DATABASE_NAME}
- DATABASE_DRIVER=postgresql
depends_on:
- postgres
@@ -23,9 +25,9 @@ services:
postgres:
image: postgres:17
environment:
- POSTGRES_USER=calminer
- POSTGRES_PASSWORD=calminer_password
- POSTGRES_DB=calminer_db
- POSTGRES_USER=${DATABASE_USER}
- POSTGRES_PASSWORD=${DATABASE_PASSWORD}
- POSTGRES_DB=${DATABASE_NAME}
ports:
- "5432:5432"
volumes:

k8s/configmap.yaml (new file, 14 lines)

@@ -0,0 +1,14 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: calminer-config
data:
DATABASE_HOST: "calminer-db"
DATABASE_PORT: "5432"
DATABASE_USER: "calminer"
DATABASE_NAME: "calminer_db"
DATABASE_DRIVER: "postgresql"
CALMINER_EXPORT_MAX_ROWS: "10000"
CALMINER_EXPORT_METADATA: "true"
CALMINER_IMPORT_STAGING_TTL: "300"
CALMINER_IMPORT_MAX_ROWS: "50000"

k8s/deployment.yaml (new file, 54 lines)

@@ -0,0 +1,54 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: calminer-app
labels:
app: calminer
spec:
replicas: 3
selector:
matchLabels:
app: calminer
template:
metadata:
labels:
app: calminer
spec:
containers:
- name: calminer
image: registry.example.com/calminer:latest
ports:
- containerPort: 8003
envFrom:
- configMapRef:
name: calminer-config
- secretRef:
name: calminer-secrets
resources:
requests:
memory: "256Mi"
cpu: "250m"
limits:
memory: "512Mi"
cpu: "500m"
livenessProbe:
httpGet:
path: /health
port: 8003
initialDelaySeconds: 30
periodSeconds: 10
readinessProbe:
httpGet:
path: /health
port: 8003
initialDelaySeconds: 5
periodSeconds: 5
initContainers:
- name: wait-for-db
image: postgres:17
command:
[
"sh",
"-c",
"until pg_isready -h calminer-db -p 5432; do echo waiting for database; sleep 2; done;",
]

k8s/ingress.yaml (new file, 18 lines)

@@ -0,0 +1,18 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: calminer-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: calminer.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: calminer-service
port:
number: 80

k8s/postgres-service.yaml (new file, 13 lines)

@@ -0,0 +1,13 @@
apiVersion: v1
kind: Service
metadata:
name: calminer-db
labels:
app: calminer-db
spec:
selector:
app: calminer-db
ports:
- port: 5432
targetPort: 5432
clusterIP: None # Headless service for StatefulSet

k8s/postgres.yaml (new file, 48 lines)

@@ -0,0 +1,48 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: calminer-db
spec:
serviceName: calminer-db
replicas: 1
selector:
matchLabels:
app: calminer-db
template:
metadata:
labels:
app: calminer-db
spec:
containers:
- name: postgres
image: postgres:17
ports:
- containerPort: 5432
env:
- name: POSTGRES_USER
value: "calminer"
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: calminer-secrets
key: DATABASE_PASSWORD
- name: POSTGRES_DB
value: "calminer_db"
resources:
requests:
memory: "256Mi"
cpu: "250m"
limits:
memory: "512Mi"
cpu: "500m"
volumeMounts:
- name: postgres-storage
mountPath: /var/lib/postgresql/data
volumeClaimTemplates:
- metadata:
name: postgres-storage
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 10Gi

k8s/secret.yaml (new file, 8 lines)

@@ -0,0 +1,8 @@
apiVersion: v1
kind: Secret
metadata:
name: calminer-secrets
type: Opaque
data:
DATABASE_PASSWORD: Y2FsbWluZXJfcGFzc3dvcmQ= # base64 encoded 'calminer_password'
CALMINER_SEED_ADMIN_PASSWORD: Q2hhbmdlTWUxMjMh # base64 encoded 'ChangeMe123!'

k8s/service.yaml (new file, 14 lines)

@@ -0,0 +1,14 @@
apiVersion: v1
kind: Service
metadata:
name: calminer-service
labels:
app: calminer
spec:
selector:
app: calminer
ports:
- port: 80
targetPort: 8003
protocol: TCP
type: ClusterIP

main.py (modified, 125 lines)

@@ -1,28 +1,88 @@
from routes.distributions import router as distributions_router
from routes.ui import router as ui_router
from routes.parameters import router as parameters_router
import logging
from contextlib import asynccontextmanager
from typing import Awaitable, Callable
from fastapi import FastAPI, Request, Response
from fastapi.staticfiles import StaticFiles
from fastapi.responses import FileResponse
from config.settings import get_settings
from middleware.auth_session import AuthSessionMiddleware
from middleware.metrics import MetricsMiddleware
from middleware.validation import validate_json
from config.database import Base, engine
from routes.auth import router as auth_router
from routes.dashboard import router as dashboard_router
from routes.calculations import router as calculations_router
from routes.imports import router as imports_router
from routes.exports import router as exports_router
from routes.projects import router as projects_router
from routes.reports import router as reports_router
from routes.scenarios import router as scenarios_router
from routes.costs import router as costs_router
from routes.consumption import router as consumption_router
from routes.production import router as production_router
from routes.equipment import router as equipment_router
from routes.reporting import router as reporting_router
from routes.currencies import router as currencies_router
from routes.simulations import router as simulations_router
from routes.maintenance import router as maintenance_router
from routes.settings import router as settings_router
from routes.users import router as users_router
from routes.ui import router as ui_router
from routes.navigation import router as navigation_router
from monitoring import router as monitoring_router
from services.bootstrap import bootstrap_admin, bootstrap_pricing_settings
from scripts.init_db import init_db as init_db_script
# Initialize database schema
Base.metadata.create_all(bind=engine)
logger = logging.getLogger(__name__)
app = FastAPI()
async def _bootstrap_startup() -> None:
settings = get_settings()
admin_settings = settings.admin_bootstrap_settings()
pricing_metadata = settings.pricing_metadata()
try:
try:
init_db_script()
except Exception:
logger.exception(
"DB initializer failed; continuing to bootstrap (non-fatal)")
role_result, admin_result = bootstrap_admin(settings=admin_settings)
pricing_result = bootstrap_pricing_settings(metadata=pricing_metadata)
logger.info(
"Admin bootstrap completed: roles=%s created=%s updated=%s rotated=%s assigned=%s",
role_result.ensured,
admin_result.created_user,
admin_result.updated_user,
admin_result.password_rotated,
admin_result.roles_granted,
)
try:
seed = pricing_result.seed
slug = getattr(seed.settings, "slug", None) if seed and getattr(
seed, "settings", None) else None
created = getattr(seed, "created", None)
updated_fields = getattr(seed, "updated_fields", None)
impurity_upserts = getattr(seed, "impurity_upserts", None)
logger.info(
"Pricing settings bootstrap completed: slug=%s created=%s updated_fields=%s impurity_upserts=%s projects_assigned=%s",
slug,
created,
updated_fields,
impurity_upserts,
pricing_result.projects_assigned,
)
except Exception:
logger.info(
"Pricing settings bootstrap completed (partial): projects_assigned=%s",
pricing_result.projects_assigned,
)
except Exception: # pragma: no cover - defensive logging
logger.exception(
"Failed to bootstrap administrator or pricing settings")
@asynccontextmanager
async def app_lifespan(_: FastAPI):
await _bootstrap_startup()
yield
app = FastAPI(lifespan=app_lifespan)
app.add_middleware(AuthSessionMiddleware)
app.add_middleware(MetricsMiddleware)
@app.middleware("http")
@@ -37,20 +97,23 @@ async def health() -> dict[str, str]:
return {"status": "ok"}
app.mount("/static", StaticFiles(directory="static"), name="static")
@app.get("/favicon.ico", include_in_schema=False)
async def favicon() -> Response:
static_directory = "static"
favicon_img = "favicon.ico"
return FileResponse(f"{static_directory}/{favicon_img}")
# Include API routers
app.include_router(dashboard_router)
app.include_router(calculations_router)
app.include_router(auth_router)
app.include_router(imports_router)
app.include_router(exports_router)
app.include_router(projects_router)
app.include_router(scenarios_router)
app.include_router(parameters_router)
app.include_router(distributions_router)
app.include_router(costs_router)
app.include_router(consumption_router)
app.include_router(simulations_router)
app.include_router(production_router)
app.include_router(equipment_router)
app.include_router(maintenance_router)
app.include_router(reporting_router)
app.include_router(currencies_router)
app.include_router(settings_router)
app.include_router(reports_router)
app.include_router(ui_router)
app.include_router(users_router)
app.include_router(monitoring_router)
app.include_router(navigation_router)
app.mount("/static", StaticFiles(directory="static"), name="static")
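Worth noting with the move to a lifespan handler: startup work now runs only when the ASGI lifespan protocol is exercised. With Starlette's `TestClient` that means using it as a context manager; a minimal sketch:

```python
from fastapi.testclient import TestClient

from main import app

# Entering the context manager runs app_lifespan's startup (bootstrap) phase;
# instantiating TestClient without the `with` block would skip it.
with TestClient(app) as client:
    response = client.get("/health")
    assert response.json() == {"status": "ok"}
```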

middleware/auth_session.py (new file, 218 lines)

@@ -0,0 +1,218 @@
from __future__ import annotations
from dataclasses import dataclass
from typing import Callable, Iterable, Optional
from fastapi import Request, Response
from starlette.middleware.base import BaseHTTPMiddleware, RequestResponseEndpoint
from starlette.types import ASGIApp
from config.settings import Settings, get_settings
from sqlalchemy.orm.exc import DetachedInstanceError
from models import User
from monitoring.metrics import ACTIVE_CONNECTIONS
from services.exceptions import EntityNotFoundError
from services.security import (
JWTSettings,
TokenDecodeError,
TokenError,
TokenExpiredError,
TokenTypeMismatchError,
create_access_token,
create_refresh_token,
decode_access_token,
decode_refresh_token,
)
from services.session import (
AuthSession,
SessionStrategy,
SessionTokens,
build_session_strategy,
clear_session_cookies,
extract_session_tokens,
set_session_cookies,
)
from services.unit_of_work import UnitOfWork
_AUTH_SCOPE = "auth"
@dataclass(slots=True)
class _ResolutionResult:
session: AuthSession
strategy: SessionStrategy
jwt_settings: JWTSettings
class AuthSessionMiddleware(BaseHTTPMiddleware):
"""Resolve authenticated users from session cookies and refresh tokens."""
_active_sessions: int = 0
def __init__(
self,
app: ASGIApp,
*,
settings_provider: Callable[[], Settings] = get_settings,
unit_of_work_factory: Callable[[], UnitOfWork] = UnitOfWork,
refresh_scopes: Iterable[str] | None = None,
) -> None:
super().__init__(app)
self._settings_provider = settings_provider
self._unit_of_work_factory = unit_of_work_factory
self._refresh_scopes = tuple(
refresh_scopes) if refresh_scopes else (_AUTH_SCOPE,)
async def dispatch(self, request: Request, call_next: RequestResponseEndpoint) -> Response:
resolved = self._resolve_session(request)
# Track active sessions for authenticated users
try:
user_active = bool(resolved.session.user and getattr(
resolved.session.user, "is_active", False))
except DetachedInstanceError:
user_active = False
if user_active:
AuthSessionMiddleware._active_sessions += 1
ACTIVE_CONNECTIONS.set(AuthSessionMiddleware._active_sessions)
response: Response | None = None
try:
response = await call_next(request)
return response
finally:
# Always decrement the active sessions counter if we incremented it.
if user_active:
AuthSessionMiddleware._active_sessions = max(
0, AuthSessionMiddleware._active_sessions - 1)
ACTIVE_CONNECTIONS.set(AuthSessionMiddleware._active_sessions)
# Only apply session cookies if a response was produced by downstream
# application. If an exception occurred before a response was created
# we avoid raising another error here.
import logging
if response is not None:
try:
self._apply_session(response, resolved)
except Exception:
logging.getLogger(__name__).exception(
"Failed to apply session cookies to response"
)
else:
logging.getLogger(__name__).debug(
"AuthSessionMiddleware: no response produced by downstream app (response is None)"
)
def _resolve_session(self, request: Request) -> _ResolutionResult:
settings = self._settings_provider()
jwt_settings = settings.jwt_settings()
strategy = build_session_strategy(settings.session_settings())
tokens = extract_session_tokens(request, strategy)
session = AuthSession(tokens=tokens)
request.state.auth_session = session
if tokens.access_token:
if self._try_access_token(session, tokens, jwt_settings):
return _ResolutionResult(session=session, strategy=strategy, jwt_settings=jwt_settings)
if tokens.refresh_token:
self._try_refresh_token(
session, tokens.refresh_token, jwt_settings)
return _ResolutionResult(session=session, strategy=strategy, jwt_settings=jwt_settings)
def _try_access_token(
self,
session: AuthSession,
tokens: SessionTokens,
jwt_settings: JWTSettings,
) -> bool:
try:
payload = decode_access_token(
tokens.access_token or "", jwt_settings)
except TokenExpiredError:
return False
except (TokenDecodeError, TokenTypeMismatchError, TokenError):
session.mark_cleared()
return False
user = self._load_user(payload.sub)
if not user or not user.is_active or _AUTH_SCOPE not in payload.scopes:
session.mark_cleared()
return False
session.user = user
session.scopes = tuple(payload.scopes)
session.set_role_slugs(role.name for role in getattr(user, "roles", []) if role)
return True
def _try_refresh_token(
self,
session: AuthSession,
refresh_token: str,
jwt_settings: JWTSettings,
) -> None:
try:
payload = decode_refresh_token(refresh_token, jwt_settings)
except (TokenExpiredError, TokenDecodeError, TokenTypeMismatchError, TokenError):
session.mark_cleared()
return
user = self._load_user(payload.sub)
if not user or not user.is_active or not self._is_refresh_scope_allowed(payload.scopes):
session.mark_cleared()
return
session.user = user
session.scopes = tuple(payload.scopes)
session.set_role_slugs(role.name for role in getattr(user, "roles", []) if role)
access_token = create_access_token(
str(user.id),
jwt_settings,
scopes=payload.scopes,
)
new_refresh = create_refresh_token(
str(user.id),
jwt_settings,
scopes=payload.scopes,
)
session.issue_tokens(access_token=access_token,
refresh_token=new_refresh)
def _is_refresh_scope_allowed(self, scopes: Iterable[str]) -> bool:
candidate_scopes = set(scopes)
return any(scope in candidate_scopes for scope in self._refresh_scopes)
def _load_user(self, subject: str) -> Optional[User]:
try:
user_id = int(subject)
except ValueError:
return None
with self._unit_of_work_factory() as uow:
if not uow.users:
return None
try:
user = uow.users.get(user_id, with_roles=True)
except EntityNotFoundError:
return None
return user
def _apply_session(self, response: Response, resolved: _ResolutionResult) -> None:
session = resolved.session
if session.clear_cookies:
clear_session_cookies(response, resolved.strategy)
return
if session.issued_access_token:
refresh_token = session.issued_refresh_token or session.tokens.refresh_token
set_session_cookies(
response,
access_token=session.issued_access_token,
refresh_token=refresh_token,
strategy=resolved.strategy,
jwt_settings=resolved.jwt_settings,
)

middleware/metrics.py (new file, 58 lines)

@@ -0,0 +1,58 @@
from __future__ import annotations
import logging
import time
from typing import Callable
from fastapi import Request, Response
from starlette.middleware.base import BaseHTTPMiddleware
from monitoring.metrics import observe_request
from services.metrics import get_metrics_service
class MetricsMiddleware(BaseHTTPMiddleware):
async def dispatch(self, request: Request, call_next: Callable[[Request], Response]) -> Response:
start_time = time.time()
response = await call_next(request)
process_time = time.time() - start_time
observe_request(
method=request.method,
endpoint=request.url.path,
status=response.status_code,
seconds=process_time,
)
# Store in database asynchronously
background_tasks = getattr(request.state, "background_tasks", None)
if background_tasks:
background_tasks.add_task(
store_request_metric,
method=request.method,
endpoint=request.url.path,
status_code=response.status_code,
duration_seconds=process_time,
)
return response
async def store_request_metric(
method: str, endpoint: str, status_code: int, duration_seconds: float
) -> None:
"""Store request metric in database."""
try:
service = get_metrics_service()
service.store_metric(
metric_name="http_request",
value=duration_seconds,
labels={"method": method, "endpoint": endpoint,
"status": status_code},
endpoint=endpoint,
method=method,
status_code=status_code,
duration_seconds=duration_seconds,
)
    except Exception:
        # Never fail the request over metric persistence; record the
        # failure for operators instead of silently dropping it.
        logging.getLogger(__name__).exception("Failed to store request metric")
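For context, a sketch of what `observe_request` in `monitoring/metrics.py` plausibly wraps; the metric name and label set are assumptions, not the project's actual definitions:

```python
from prometheus_client import Histogram

REQUEST_DURATION = Histogram(
    "http_request_duration_seconds",
    "Time spent handling HTTP requests",
    labelnames=("method", "endpoint", "status"),
)


def observe_request(method: str, endpoint: str, status: int, seconds: float) -> None:
    # A histogram records counts and summed durations per label set, which is
    # enough to derive request rate, error rate, and latency quantiles.
    REQUEST_DURATION.labels(
        method=method, endpoint=endpoint, status=str(status)
    ).observe(seconds)
```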

middleware/validation.py (modified)

@@ -10,10 +10,14 @@ async def validate_json(
) -> Response:
# Only validate JSON for requests with a body
if request.method in ("POST", "PUT", "PATCH"):
# Only attempt JSON parsing when the client indicates a JSON content type.
content_type = (request.headers.get("content-type") or "").lower()
if "json" in content_type:
try:
# attempt to parse json body
await request.json()
except Exception:
raise HTTPException(status_code=400, detail="Invalid JSON payload")
raise HTTPException(
status_code=400, detail="Invalid JSON payload")
response = await call_next(request)
return response

models/__init__.py (new file, 72 lines)

@@ -0,0 +1,72 @@
"""Database models and shared metadata for the CalMiner domain."""
from .financial_input import FinancialInput
from .metadata import (
COST_BUCKET_METADATA,
RESOURCE_METADATA,
STOCHASTIC_VARIABLE_METADATA,
ResourceDescriptor,
StochasticVariableDescriptor,
)
from .performance_metric import PerformanceMetric
from .pricing_settings import (
PricingImpuritySettings,
PricingMetalSettings,
PricingSettings,
)
from .enums import (
CostBucket,
DistributionType,
FinancialCategory,
MiningOperationType,
ResourceType,
ScenarioStatus,
StochasticVariable,
)
from .project import Project
from .scenario import Scenario
from .simulation_parameter import SimulationParameter
from .user import Role, User, UserRole, password_context
from .navigation import NavigationGroup, NavigationLink
from .profitability_snapshot import ProjectProfitability, ScenarioProfitability
from .capex_snapshot import ProjectCapexSnapshot, ScenarioCapexSnapshot
from .opex_snapshot import (
ProjectOpexSnapshot,
ScenarioOpexSnapshot,
)
__all__ = [
"FinancialCategory",
"FinancialInput",
"MiningOperationType",
"Project",
"ProjectProfitability",
"ProjectCapexSnapshot",
"ProjectOpexSnapshot",
"PricingSettings",
"PricingMetalSettings",
"PricingImpuritySettings",
"Scenario",
"ScenarioProfitability",
"ScenarioCapexSnapshot",
"ScenarioOpexSnapshot",
"ScenarioStatus",
"DistributionType",
"SimulationParameter",
"ResourceType",
"CostBucket",
"StochasticVariable",
"RESOURCE_METADATA",
"COST_BUCKET_METADATA",
"STOCHASTIC_VARIABLE_METADATA",
"ResourceDescriptor",
"StochasticVariableDescriptor",
"User",
"Role",
"UserRole",
"password_context",
"PerformanceMetric",
"NavigationGroup",
"NavigationLink",
]

111
models/capex_snapshot.py Normal file
View File

@@ -0,0 +1,111 @@
from __future__ import annotations
from datetime import datetime
from typing import TYPE_CHECKING
from sqlalchemy import JSON, DateTime, ForeignKey, Integer, Numeric, String
from sqlalchemy.orm import Mapped, mapped_column, relationship
from sqlalchemy.sql import func
from config.database import Base
if TYPE_CHECKING: # pragma: no cover
from .project import Project
from .scenario import Scenario
from .user import User
class ProjectCapexSnapshot(Base):
"""Snapshot of aggregated capex metrics at the project level."""
__tablename__ = "project_capex_snapshots"
id: Mapped[int] = mapped_column(Integer, primary_key=True)
project_id: Mapped[int] = mapped_column(
ForeignKey("projects.id", ondelete="CASCADE"), nullable=False, index=True
)
created_by_id: Mapped[int | None] = mapped_column(
ForeignKey("users.id", ondelete="SET NULL"), nullable=True, index=True
)
calculation_source: Mapped[str | None] = mapped_column(
String(64), nullable=True)
calculated_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now()
)
currency_code: Mapped[str | None] = mapped_column(String(3), nullable=True)
total_capex: Mapped[float | None] = mapped_column(
Numeric(18, 2), nullable=True)
contingency_pct: Mapped[float | None] = mapped_column(
Numeric(12, 6), nullable=True)
contingency_amount: Mapped[float | None] = mapped_column(
Numeric(18, 2), nullable=True)
total_with_contingency: Mapped[float | None] = mapped_column(
Numeric(18, 2), nullable=True)
component_count: Mapped[int | None] = mapped_column(Integer, nullable=True)
payload: Mapped[dict | None] = mapped_column(JSON, nullable=True)
created_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now()
)
updated_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now(), onupdate=func.now()
)
project: Mapped[Project] = relationship(
"Project", back_populates="capex_snapshots"
)
created_by: Mapped[User | None] = relationship("User")
def __repr__(self) -> str: # pragma: no cover
return (
"ProjectCapexSnapshot(id={id!r}, project_id={project_id!r}, total_capex={total_capex!r})".format(
id=self.id, project_id=self.project_id, total_capex=self.total_capex
)
)
class ScenarioCapexSnapshot(Base):
"""Snapshot of capex metrics for an individual scenario."""
__tablename__ = "scenario_capex_snapshots"
id: Mapped[int] = mapped_column(Integer, primary_key=True)
scenario_id: Mapped[int] = mapped_column(
ForeignKey("scenarios.id", ondelete="CASCADE"), nullable=False, index=True
)
created_by_id: Mapped[int | None] = mapped_column(
ForeignKey("users.id", ondelete="SET NULL"), nullable=True, index=True
)
calculation_source: Mapped[str | None] = mapped_column(
String(64), nullable=True)
calculated_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now()
)
currency_code: Mapped[str | None] = mapped_column(String(3), nullable=True)
total_capex: Mapped[float | None] = mapped_column(
Numeric(18, 2), nullable=True)
contingency_pct: Mapped[float | None] = mapped_column(
Numeric(12, 6), nullable=True)
contingency_amount: Mapped[float | None] = mapped_column(
Numeric(18, 2), nullable=True)
total_with_contingency: Mapped[float | None] = mapped_column(
Numeric(18, 2), nullable=True)
component_count: Mapped[int | None] = mapped_column(Integer, nullable=True)
payload: Mapped[dict | None] = mapped_column(JSON, nullable=True)
created_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now()
)
updated_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now(), onupdate=func.now()
)
scenario: Mapped[Scenario] = relationship(
"Scenario", back_populates="capex_snapshots"
)
created_by: Mapped[User | None] = relationship("User")
def __repr__(self) -> str: # pragma: no cover
return (
"ScenarioCapexSnapshot(id={id!r}, scenario_id={scenario_id!r}, total_capex={total_capex!r})".format(
id=self.id, scenario_id=self.scenario_id, total_capex=self.total_capex
)
)

96
models/enums.py Normal file
View File

@@ -0,0 +1,96 @@
from __future__ import annotations
from enum import Enum
from typing import Type
from sqlalchemy import Enum as SQLEnum
def sql_enum(enum_cls: Type[Enum], *, name: str) -> SQLEnum:
"""Build a SQLAlchemy Enum that maps using the enum member values."""
return SQLEnum(
enum_cls,
name=name,
create_type=False,
validate_strings=True,
values_callable=lambda enum_cls: [member.value for member in enum_cls],
)
class MiningOperationType(str, Enum):
"""Supported mining operation categories."""
OPEN_PIT = "open_pit"
UNDERGROUND = "underground"
IN_SITU_LEACH = "in_situ_leach"
PLACER = "placer"
QUARRY = "quarry"
MOUNTAINTOP_REMOVAL = "mountaintop_removal"
OTHER = "other"
class ScenarioStatus(str, Enum):
"""Lifecycle states for project scenarios."""
DRAFT = "draft"
ACTIVE = "active"
ARCHIVED = "archived"
class FinancialCategory(str, Enum):
"""Enumeration of cost and revenue classifications."""
CAPITAL_EXPENDITURE = "capex"
OPERATING_EXPENDITURE = "opex"
REVENUE = "revenue"
CONTINGENCY = "contingency"
OTHER = "other"
class DistributionType(str, Enum):
"""Supported stochastic distribution families for simulations."""
NORMAL = "normal"
TRIANGULAR = "triangular"
UNIFORM = "uniform"
LOGNORMAL = "lognormal"
CUSTOM = "custom"
class ResourceType(str, Enum):
"""Primary consumables and resources used in mining operations."""
DIESEL = "diesel"
ELECTRICITY = "electricity"
WATER = "water"
EXPLOSIVES = "explosives"
REAGENTS = "reagents"
LABOR = "labor"
EQUIPMENT_HOURS = "equipment_hours"
TAILINGS_CAPACITY = "tailings_capacity"
class CostBucket(str, Enum):
"""Granular cost buckets aligned with project accounting."""
CAPITAL_INITIAL = "capital_initial"
CAPITAL_SUSTAINING = "capital_sustaining"
OPERATING_FIXED = "operating_fixed"
OPERATING_VARIABLE = "operating_variable"
MAINTENANCE = "maintenance"
RECLAMATION = "reclamation"
ROYALTIES = "royalties"
GENERAL_ADMIN = "general_admin"
class StochasticVariable(str, Enum):
"""Domain variables that typically require probabilistic modelling."""
ORE_GRADE = "ore_grade"
RECOVERY_RATE = "recovery_rate"
METAL_PRICE = "metal_price"
OPERATING_COST = "operating_cost"
CAPITAL_COST = "capital_cost"
DISCOUNT_RATE = "discount_rate"
THROUGHPUT = "throughput"
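Because `sql_enum` passes `values_callable`, the database stores the member values rather than the member names; a quick sketch:

# Sketch: the values persisted for ScenarioStatus under sql_enum's values_callable.
print([member.value for member in ScenarioStatus])  # ['draft', 'active', 'archived']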

62
models/financial_input.py Normal file
View File

@@ -0,0 +1,62 @@
from __future__ import annotations
from datetime import date, datetime
from typing import TYPE_CHECKING
from sqlalchemy import (
Date,
DateTime,
ForeignKey,
Integer,
Numeric,
String,
Text,
)
from sqlalchemy.orm import Mapped, mapped_column, relationship, validates
from sqlalchemy.sql import func
from config.database import Base
from .enums import CostBucket, FinancialCategory, sql_enum
from services.currency import normalise_currency
if TYPE_CHECKING: # pragma: no cover
from .scenario import Scenario
class FinancialInput(Base):
"""Line-item financial assumption attached to a scenario."""
__tablename__ = "financial_inputs"
id: Mapped[int] = mapped_column(Integer, primary_key=True)
scenario_id: Mapped[int] = mapped_column(
ForeignKey("scenarios.id", ondelete="CASCADE"), nullable=False, index=True
)
name: Mapped[str] = mapped_column(String(255), nullable=False)
category: Mapped[FinancialCategory] = mapped_column(
sql_enum(FinancialCategory, name="financialcategory"), nullable=False
)
cost_bucket: Mapped[CostBucket | None] = mapped_column(
sql_enum(CostBucket, name="costbucket"), nullable=True
)
amount: Mapped[float] = mapped_column(Numeric(18, 2), nullable=False)
currency: Mapped[str | None] = mapped_column(String(3), nullable=True)
effective_date: Mapped[date | None] = mapped_column(Date, nullable=True)
notes: Mapped[str | None] = mapped_column(Text, nullable=True)
created_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now()
)
updated_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now(), onupdate=func.now()
)
scenario: Mapped["Scenario"] = relationship(
"Scenario", back_populates="financial_inputs")
@validates("currency")
def _validate_currency(self, key: str, value: str | None) -> str | None:
return normalise_currency(value)
def __repr__(self) -> str: # pragma: no cover
return f"FinancialInput(id={self.id!r}, scenario_id={self.scenario_id!r}, name={self.name!r})"

View File

@@ -0,0 +1,31 @@
from __future__ import annotations
from sqlalchemy import Column, DateTime, ForeignKey, Integer, String, Text
from sqlalchemy.sql import func
from config.database import Base
class ImportExportLog(Base):
"""Audit log for import and export operations."""
__tablename__ = "import_export_logs"
id = Column(Integer, primary_key=True, index=True)
action = Column(String(32), nullable=False) # preview, commit, export
dataset = Column(String(32), nullable=False) # projects, scenarios, etc.
status = Column(String(16), nullable=False) # success, failure
filename = Column(String(255), nullable=True)
row_count = Column(Integer, nullable=True)
detail = Column(Text, nullable=True)
user_id = Column(Integer, ForeignKey("users.id"), nullable=True)
created_at = Column(
DateTime(timezone=True), nullable=False, server_default=func.now()
)
def __repr__(self) -> str: # pragma: no cover
return (
f"ImportExportLog(id={self.id}, action={self.action}, "
f"dataset={self.dataset}, status={self.status})"
)

108
models/metadata.py Normal file
View File

@@ -0,0 +1,108 @@
from __future__ import annotations
from dataclasses import dataclass
from .enums import ResourceType, CostBucket, StochasticVariable
@dataclass(frozen=True)
class ResourceDescriptor:
"""Describes canonical metadata for a resource type."""
unit: str
description: str
RESOURCE_METADATA: dict[ResourceType, ResourceDescriptor] = {
ResourceType.DIESEL: ResourceDescriptor(unit="L", description="Diesel fuel consumption"),
ResourceType.ELECTRICITY: ResourceDescriptor(unit="kWh", description="Electrical power usage"),
ResourceType.WATER: ResourceDescriptor(unit="m3", description="Process and dust suppression water"),
ResourceType.EXPLOSIVES: ResourceDescriptor(unit="kg", description="Blasting agent consumption"),
ResourceType.REAGENTS: ResourceDescriptor(unit="kg", description="Processing reagents"),
ResourceType.LABOR: ResourceDescriptor(unit="hours", description="Direct labor hours"),
ResourceType.EQUIPMENT_HOURS: ResourceDescriptor(unit="hours", description="Mobile equipment operating hours"),
ResourceType.TAILINGS_CAPACITY: ResourceDescriptor(unit="m3", description="Tailings storage usage"),
}
@dataclass(frozen=True)
class CostBucketDescriptor:
"""Describes reporting label and guidance for a cost bucket."""
label: str
description: str
COST_BUCKET_METADATA: dict[CostBucket, CostBucketDescriptor] = {
CostBucket.CAPITAL_INITIAL: CostBucketDescriptor(
label="Initial Capital",
description="Pre-production capital required to construct the mine",
),
CostBucket.CAPITAL_SUSTAINING: CostBucketDescriptor(
label="Sustaining Capital",
description="Ongoing capital investments to maintain operations",
),
CostBucket.OPERATING_FIXED: CostBucketDescriptor(
label="Fixed Operating",
description="Fixed operating costs independent of production rate",
),
CostBucket.OPERATING_VARIABLE: CostBucketDescriptor(
label="Variable Operating",
description="Costs that scale with throughput or production",
),
CostBucket.MAINTENANCE: CostBucketDescriptor(
label="Maintenance",
description="Maintenance and repair expenditures",
),
CostBucket.RECLAMATION: CostBucketDescriptor(
label="Reclamation",
description="Mine closure and reclamation liabilities",
),
CostBucket.ROYALTIES: CostBucketDescriptor(
label="Royalties",
description="Royalty and streaming obligations",
),
CostBucket.GENERAL_ADMIN: CostBucketDescriptor(
label="G&A",
description="Corporate and site general and administrative costs",
),
}
@dataclass(frozen=True)
class StochasticVariableDescriptor:
"""Metadata describing how a stochastic variable is typically modelled."""
unit: str
description: str
STOCHASTIC_VARIABLE_METADATA: dict[StochasticVariable, StochasticVariableDescriptor] = {
StochasticVariable.ORE_GRADE: StochasticVariableDescriptor(
unit="g/t",
description="Head grade variability across the ore body",
),
StochasticVariable.RECOVERY_RATE: StochasticVariableDescriptor(
unit="%",
description="Metallurgical recovery uncertainty",
),
StochasticVariable.METAL_PRICE: StochasticVariableDescriptor(
unit="$/unit",
description="Commodity price fluctuations",
),
StochasticVariable.OPERATING_COST: StochasticVariableDescriptor(
unit="$/t",
description="Operating cost per tonne volatility",
),
StochasticVariable.CAPITAL_COST: StochasticVariableDescriptor(
unit="$",
description="Capital cost overrun/underrun potential",
),
StochasticVariable.DISCOUNT_RATE: StochasticVariableDescriptor(
unit="%",
description="Discount rate sensitivity",
),
StochasticVariable.THROUGHPUT: StochasticVariableDescriptor(
unit="t/d",
description="Plant throughput variability",
),
}
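A small lookup sketch using only the names defined in this module:

# Sketch: resolving display metadata for a resource type.
descriptor = RESOURCE_METADATA[ResourceType.DIESEL]
print(f"{ResourceType.DIESEL.value}: {descriptor.description} [{descriptor.unit}]")
# -> diesel: Diesel fuel consumption [L]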

125
models/navigation.py Normal file
View File

@@ -0,0 +1,125 @@
from __future__ import annotations
from datetime import datetime
from typing import List, Optional
from sqlalchemy import (
Boolean,
CheckConstraint,
DateTime,
ForeignKey,
Index,
Integer,
String,
UniqueConstraint,
)
from sqlalchemy.orm import Mapped, mapped_column, relationship
from sqlalchemy.sql import func
from sqlalchemy.ext.mutable import MutableList
from sqlalchemy import JSON
from config.database import Base
class NavigationGroup(Base):
__tablename__ = "navigation_groups"
__table_args__ = (
UniqueConstraint("slug", name="uq_navigation_groups_slug"),
Index("ix_navigation_groups_sort_order", "sort_order"),
)
id: Mapped[int] = mapped_column(Integer, primary_key=True)
slug: Mapped[str] = mapped_column(String(64), nullable=False)
label: Mapped[str] = mapped_column(String(128), nullable=False)
sort_order: Mapped[int] = mapped_column(
Integer, nullable=False, default=100)
icon: Mapped[Optional[str]] = mapped_column(String(64))
tooltip: Mapped[Optional[str]] = mapped_column(String(255))
is_enabled: Mapped[bool] = mapped_column(
Boolean, nullable=False, default=True)
created_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now()
)
updated_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now(), onupdate=func.now()
)
links: Mapped[List["NavigationLink"]] = relationship(
"NavigationLink",
back_populates="group",
cascade="all, delete-orphan",
order_by="NavigationLink.sort_order",
)
def __repr__(self) -> str: # pragma: no cover
return f"NavigationGroup(id={self.id!r}, slug={self.slug!r})"
class NavigationLink(Base):
__tablename__ = "navigation_links"
__table_args__ = (
UniqueConstraint("group_id", "slug",
name="uq_navigation_links_group_slug"),
Index("ix_navigation_links_group_sort", "group_id", "sort_order"),
Index("ix_navigation_links_parent_sort",
"parent_link_id", "sort_order"),
CheckConstraint(
"(route_name IS NOT NULL) OR (href_override IS NOT NULL)",
name="ck_navigation_links_route_or_href",
),
)
id: Mapped[int] = mapped_column(Integer, primary_key=True)
group_id: Mapped[int] = mapped_column(
ForeignKey("navigation_groups.id", ondelete="CASCADE"), nullable=False
)
parent_link_id: Mapped[Optional[int]] = mapped_column(
ForeignKey("navigation_links.id", ondelete="CASCADE")
)
slug: Mapped[str] = mapped_column(String(64), nullable=False)
label: Mapped[str] = mapped_column(String(128), nullable=False)
route_name: Mapped[Optional[str]] = mapped_column(String(128))
href_override: Mapped[Optional[str]] = mapped_column(String(512))
match_prefix: Mapped[Optional[str]] = mapped_column(String(512))
sort_order: Mapped[int] = mapped_column(
Integer, nullable=False, default=100)
icon: Mapped[Optional[str]] = mapped_column(String(64))
tooltip: Mapped[Optional[str]] = mapped_column(String(255))
required_roles: Mapped[list[str]] = mapped_column(
MutableList.as_mutable(JSON), nullable=False, default=list
)
is_enabled: Mapped[bool] = mapped_column(
Boolean, nullable=False, default=True)
is_external: Mapped[bool] = mapped_column(
Boolean, nullable=False, default=False)
created_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now()
)
updated_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now(), onupdate=func.now()
)
group: Mapped[NavigationGroup] = relationship(
NavigationGroup,
back_populates="links",
)
parent: Mapped[Optional["NavigationLink"]] = relationship(
"NavigationLink",
remote_side="NavigationLink.id",
back_populates="children",
)
children: Mapped[List["NavigationLink"]] = relationship(
"NavigationLink",
back_populates="parent",
cascade="all, delete-orphan",
order_by="NavigationLink.sort_order",
)
def is_visible_for_roles(self, roles: list[str]) -> bool:
if not self.required_roles:
return True
role_set = set(roles)
return any(role in role_set for role in self.required_roles)
def __repr__(self) -> str: # pragma: no cover
return f"NavigationLink(id={self.id!r}, slug={self.slug!r})"

123
models/opex_snapshot.py Normal file
View File

@@ -0,0 +1,123 @@
from __future__ import annotations
from datetime import datetime
from typing import TYPE_CHECKING
from sqlalchemy import JSON, Boolean, DateTime, ForeignKey, Integer, Numeric, String
from sqlalchemy.orm import Mapped, mapped_column, relationship
from sqlalchemy.sql import func
from config.database import Base
if TYPE_CHECKING: # pragma: no cover
from .project import Project
from .scenario import Scenario
from .user import User
class ProjectOpexSnapshot(Base):
"""Snapshot of recurring opex metrics at the project level."""
__tablename__ = "project_opex_snapshots"
id: Mapped[int] = mapped_column(Integer, primary_key=True)
project_id: Mapped[int] = mapped_column(
ForeignKey("projects.id", ondelete="CASCADE"), nullable=False, index=True
)
created_by_id: Mapped[int | None] = mapped_column(
ForeignKey("users.id", ondelete="SET NULL"), nullable=True, index=True
)
calculation_source: Mapped[str | None] = mapped_column(
String(64), nullable=True)
calculated_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now()
)
currency_code: Mapped[str | None] = mapped_column(String(3), nullable=True)
overall_annual: Mapped[float | None] = mapped_column(
Numeric(18, 2), nullable=True)
escalated_total: Mapped[float | None] = mapped_column(
Numeric(18, 2), nullable=True)
annual_average: Mapped[float | None] = mapped_column(
Numeric(18, 2), nullable=True)
evaluation_horizon_years: Mapped[int | None] = mapped_column(
Integer, nullable=True)
escalation_pct: Mapped[float | None] = mapped_column(
Numeric(12, 6), nullable=True)
apply_escalation: Mapped[bool] = mapped_column(
Boolean, nullable=False, default=True)
component_count: Mapped[int | None] = mapped_column(Integer, nullable=True)
payload: Mapped[dict | None] = mapped_column(JSON, nullable=True)
created_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now()
)
updated_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now(), onupdate=func.now()
)
project: Mapped[Project] = relationship(
"Project", back_populates="opex_snapshots"
)
created_by: Mapped[User | None] = relationship("User")
def __repr__(self) -> str: # pragma: no cover
return (
"ProjectOpexSnapshot(id={id!r}, project_id={project_id!r}, overall_annual={overall_annual!r})".format(
id=self.id,
project_id=self.project_id,
overall_annual=self.overall_annual,
)
)
class ScenarioOpexSnapshot(Base):
"""Snapshot of opex metrics for an individual scenario."""
__tablename__ = "scenario_opex_snapshots"
id: Mapped[int] = mapped_column(Integer, primary_key=True)
scenario_id: Mapped[int] = mapped_column(
ForeignKey("scenarios.id", ondelete="CASCADE"), nullable=False, index=True
)
created_by_id: Mapped[int | None] = mapped_column(
ForeignKey("users.id", ondelete="SET NULL"), nullable=True, index=True
)
calculation_source: Mapped[str | None] = mapped_column(
String(64), nullable=True)
calculated_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now()
)
currency_code: Mapped[str | None] = mapped_column(String(3), nullable=True)
overall_annual: Mapped[float | None] = mapped_column(
Numeric(18, 2), nullable=True)
escalated_total: Mapped[float | None] = mapped_column(
Numeric(18, 2), nullable=True)
annual_average: Mapped[float | None] = mapped_column(
Numeric(18, 2), nullable=True)
evaluation_horizon_years: Mapped[int | None] = mapped_column(
Integer, nullable=True)
escalation_pct: Mapped[float | None] = mapped_column(
Numeric(12, 6), nullable=True)
apply_escalation: Mapped[bool] = mapped_column(
Boolean, nullable=False, default=True)
component_count: Mapped[int | None] = mapped_column(Integer, nullable=True)
payload: Mapped[dict | None] = mapped_column(JSON, nullable=True)
created_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now()
)
updated_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now(), onupdate=func.now()
)
scenario: Mapped[Scenario] = relationship(
"Scenario", back_populates="opex_snapshots"
)
created_by: Mapped[User | None] = relationship("User")
def __repr__(self) -> str: # pragma: no cover
return (
"ScenarioOpexSnapshot(id={id!r}, scenario_id={scenario_id!r}, overall_annual={overall_annual!r})".format(
id=self.id,
scenario_id=self.scenario_id,
overall_annual=self.overall_annual,
)
)

24
models/performance_metric.py Normal file
View File

@@ -0,0 +1,24 @@
from __future__ import annotations
from datetime import datetime
from sqlalchemy import Column, DateTime, Float, Integer, String
from config.database import Base
class PerformanceMetric(Base):
__tablename__ = "performance_metrics"
id = Column(Integer, primary_key=True, index=True)
timestamp = Column(DateTime, default=datetime.utcnow, index=True)
metric_name = Column(String, index=True)
value = Column(Float)
labels = Column(String) # JSON string of labels
endpoint = Column(String, index=True, nullable=True)
method = Column(String, nullable=True)
status_code = Column(Integer, nullable=True)
duration_seconds = Column(Float, nullable=True)
def __repr__(self) -> str:
return f"<PerformanceMetric(id={self.id}, name={self.metric_name}, value={self.value})>"

176
models/pricing_settings.py Normal file
View File

@@ -0,0 +1,176 @@
"""Database models for persisted pricing configuration settings."""
from __future__ import annotations
from datetime import datetime
from typing import TYPE_CHECKING
from sqlalchemy import (
JSON,
DateTime,
ForeignKey,
Integer,
Numeric,
String,
Text,
UniqueConstraint,
)
from sqlalchemy.orm import Mapped, mapped_column, relationship, validates
from sqlalchemy.sql import func
from config.database import Base
from services.currency import normalise_currency
if TYPE_CHECKING: # pragma: no cover
from .project import Project
class PricingSettings(Base):
"""Persisted pricing defaults applied to scenario evaluations."""
__tablename__ = "pricing_settings"
id: Mapped[int] = mapped_column(Integer, primary_key=True)
name: Mapped[str] = mapped_column(String(128), nullable=False, unique=True)
slug: Mapped[str] = mapped_column(String(64), nullable=False, unique=True)
description: Mapped[str | None] = mapped_column(Text, nullable=True)
default_currency: Mapped[str | None] = mapped_column(
String(3), nullable=True)
default_payable_pct: Mapped[float] = mapped_column(
Numeric(5, 2), nullable=False, default=100.0
)
moisture_threshold_pct: Mapped[float] = mapped_column(
Numeric(5, 2), nullable=False, default=8.0
)
moisture_penalty_per_pct: Mapped[float] = mapped_column(
Numeric(14, 4), nullable=False, default=0.0
)
metadata_payload: Mapped[dict | None] = mapped_column(
"metadata", JSON, nullable=True
)
created_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now()
)
updated_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now(), onupdate=func.now()
)
metal_overrides: Mapped[list["PricingMetalSettings"]] = relationship(
"PricingMetalSettings",
back_populates="pricing_settings",
cascade="all, delete-orphan",
passive_deletes=True,
)
impurity_overrides: Mapped[list["PricingImpuritySettings"]] = relationship(
"PricingImpuritySettings",
back_populates="pricing_settings",
cascade="all, delete-orphan",
passive_deletes=True,
)
projects: Mapped[list["Project"]] = relationship(
"Project",
back_populates="pricing_settings",
cascade="all",
)
@validates("slug")
def _normalise_slug(self, key: str, value: str) -> str:
return value.strip().lower()
@validates("default_currency")
def _validate_currency(self, key: str, value: str | None) -> str | None:
return normalise_currency(value)
def __repr__(self) -> str: # pragma: no cover
return f"PricingSettings(id={self.id!r}, slug={self.slug!r})"
class PricingMetalSettings(Base):
"""Contract-specific overrides for a particular metal."""
__tablename__ = "pricing_metal_settings"
__table_args__ = (
UniqueConstraint(
"pricing_settings_id", "metal_code", name="uq_pricing_metal_settings_code"
),
)
id: Mapped[int] = mapped_column(Integer, primary_key=True)
pricing_settings_id: Mapped[int] = mapped_column(
ForeignKey("pricing_settings.id", ondelete="CASCADE"), nullable=False, index=True
)
metal_code: Mapped[str] = mapped_column(String(32), nullable=False)
payable_pct: Mapped[float | None] = mapped_column(
Numeric(5, 2), nullable=True)
moisture_threshold_pct: Mapped[float | None] = mapped_column(
Numeric(5, 2), nullable=True)
moisture_penalty_per_pct: Mapped[float | None] = mapped_column(
Numeric(14, 4), nullable=True
)
data: Mapped[dict | None] = mapped_column(JSON, nullable=True)
created_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now()
)
updated_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now(), onupdate=func.now()
)
pricing_settings: Mapped["PricingSettings"] = relationship(
"PricingSettings", back_populates="metal_overrides"
)
@validates("metal_code")
def _normalise_metal_code(self, key: str, value: str) -> str:
return value.strip().lower()
def __repr__(self) -> str: # pragma: no cover
return (
"PricingMetalSettings(" # noqa: ISC001
f"id={self.id!r}, pricing_settings_id={self.pricing_settings_id!r}, "
f"metal_code={self.metal_code!r})"
)
class PricingImpuritySettings(Base):
"""Impurity penalty thresholds associated with pricing settings."""
__tablename__ = "pricing_impurity_settings"
__table_args__ = (
UniqueConstraint(
"pricing_settings_id",
"impurity_code",
name="uq_pricing_impurity_settings_code",
),
)
id: Mapped[int] = mapped_column(Integer, primary_key=True)
pricing_settings_id: Mapped[int] = mapped_column(
ForeignKey("pricing_settings.id", ondelete="CASCADE"), nullable=False, index=True
)
impurity_code: Mapped[str] = mapped_column(String(32), nullable=False)
threshold_ppm: Mapped[float] = mapped_column(
Numeric(14, 4), nullable=False, default=0.0)
penalty_per_ppm: Mapped[float] = mapped_column(
Numeric(14, 4), nullable=False, default=0.0)
notes: Mapped[str | None] = mapped_column(Text, nullable=True)
created_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now()
)
updated_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now(), onupdate=func.now()
)
pricing_settings: Mapped["PricingSettings"] = relationship(
"PricingSettings", back_populates="impurity_overrides"
)
@validates("impurity_code")
def _normalise_impurity_code(self, key: str, value: str) -> str:
return value.strip().upper()
def __repr__(self) -> str: # pragma: no cover
return (
"PricingImpuritySettings(" # noqa: ISC001
f"id={self.id!r}, pricing_settings_id={self.pricing_settings_id!r}, "
f"impurity_code={self.impurity_code!r})"
)
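The `@validates` hooks fire on assignment, so codes are normalised even on transient objects; a sketch (in memory, no DB):

# Sketch: normalisation performed by the validators above.
settings = PricingSettings(name="Default", slug="  Base-Case  ")
print(settings.slug)            # "base-case"

metal = PricingMetalSettings(metal_code="  CU ")
print(metal.metal_code)         # "cu"

impurity = PricingImpuritySettings(impurity_code="as")
print(impurity.impurity_code)   # "AS"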

133
models/profitability_snapshot.py Normal file
View File

@@ -0,0 +1,133 @@
from __future__ import annotations
from datetime import datetime
from typing import TYPE_CHECKING
from sqlalchemy import JSON, DateTime, ForeignKey, Integer, Numeric, String
from sqlalchemy.orm import Mapped, mapped_column, relationship
from sqlalchemy.sql import func
from config.database import Base
if TYPE_CHECKING: # pragma: no cover
from .project import Project
from .scenario import Scenario
from .user import User
class ProjectProfitability(Base):
"""Snapshot of aggregated profitability metrics at the project level."""
__tablename__ = "project_profitability_snapshots"
id: Mapped[int] = mapped_column(Integer, primary_key=True)
project_id: Mapped[int] = mapped_column(
ForeignKey("projects.id", ondelete="CASCADE"), nullable=False, index=True
)
created_by_id: Mapped[int | None] = mapped_column(
ForeignKey("users.id", ondelete="SET NULL"), nullable=True, index=True
)
calculation_source: Mapped[str | None] = mapped_column(
String(64), nullable=True)
calculated_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now()
)
currency_code: Mapped[str | None] = mapped_column(String(3), nullable=True)
npv: Mapped[float | None] = mapped_column(Numeric(18, 2), nullable=True)
irr_pct: Mapped[float | None] = mapped_column(
Numeric(12, 6), nullable=True)
payback_period_years: Mapped[float | None] = mapped_column(
Numeric(12, 4), nullable=True
)
margin_pct: Mapped[float | None] = mapped_column(
Numeric(12, 6), nullable=True)
revenue_total: Mapped[float | None] = mapped_column(
Numeric(18, 2), nullable=True)
opex_total: Mapped[float | None] = mapped_column(
Numeric(18, 2), nullable=True
)
sustaining_capex_total: Mapped[float | None] = mapped_column(
Numeric(18, 2), nullable=True
)
capex: Mapped[float | None] = mapped_column(
Numeric(18, 2), nullable=True)
net_cash_flow_total: Mapped[float | None] = mapped_column(
Numeric(18, 2), nullable=True
)
payload: Mapped[dict | None] = mapped_column(JSON, nullable=True)
created_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now()
)
updated_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now(), onupdate=func.now()
)
project: Mapped[Project] = relationship(
"Project", back_populates="profitability_snapshots")
created_by: Mapped[User | None] = relationship("User")
def __repr__(self) -> str: # pragma: no cover
return (
"ProjectProfitability(id={id!r}, project_id={project_id!r}, npv={npv!r})".format(
id=self.id, project_id=self.project_id, npv=self.npv
)
)
class ScenarioProfitability(Base):
"""Snapshot of profitability metrics for an individual scenario."""
__tablename__ = "scenario_profitability_snapshots"
id: Mapped[int] = mapped_column(Integer, primary_key=True)
scenario_id: Mapped[int] = mapped_column(
ForeignKey("scenarios.id", ondelete="CASCADE"), nullable=False, index=True
)
created_by_id: Mapped[int | None] = mapped_column(
ForeignKey("users.id", ondelete="SET NULL"), nullable=True, index=True
)
calculation_source: Mapped[str | None] = mapped_column(
String(64), nullable=True)
calculated_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now()
)
currency_code: Mapped[str | None] = mapped_column(String(3), nullable=True)
npv: Mapped[float | None] = mapped_column(Numeric(18, 2), nullable=True)
irr_pct: Mapped[float | None] = mapped_column(
Numeric(12, 6), nullable=True)
payback_period_years: Mapped[float | None] = mapped_column(
Numeric(12, 4), nullable=True
)
margin_pct: Mapped[float | None] = mapped_column(
Numeric(12, 6), nullable=True)
revenue_total: Mapped[float | None] = mapped_column(
Numeric(18, 2), nullable=True)
opex_total: Mapped[float | None] = mapped_column(
Numeric(18, 2), nullable=True
)
sustaining_capex_total: Mapped[float | None] = mapped_column(
Numeric(18, 2), nullable=True
)
capex: Mapped[float | None] = mapped_column(
Numeric(18, 2), nullable=True)
net_cash_flow_total: Mapped[float | None] = mapped_column(
Numeric(18, 2), nullable=True
)
payload: Mapped[dict | None] = mapped_column(JSON, nullable=True)
created_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now()
)
updated_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now(), onupdate=func.now()
)
scenario: Mapped[Scenario] = relationship(
"Scenario", back_populates="profitability_snapshots")
created_by: Mapped[User | None] = relationship("User")
def __repr__(self) -> str: # pragma: no cover
return (
"ScenarioProfitability(id={id!r}, scenario_id={scenario_id!r}, npv={npv!r})".format(
id=self.id, scenario_id=self.scenario_id, npv=self.npv
)
)

104
models/project.py Normal file
View File

@@ -0,0 +1,104 @@
from __future__ import annotations
from datetime import datetime
from typing import TYPE_CHECKING, List
from .enums import MiningOperationType, sql_enum
from .profitability_snapshot import ProjectProfitability
from .capex_snapshot import ProjectCapexSnapshot
from .opex_snapshot import ProjectOpexSnapshot
from sqlalchemy import DateTime, ForeignKey, Integer, String, Text
from sqlalchemy.orm import Mapped, mapped_column, relationship
from sqlalchemy.sql import func
from config.database import Base
if TYPE_CHECKING: # pragma: no cover
from .scenario import Scenario
from .pricing_settings import PricingSettings
class Project(Base):
"""Top-level mining project grouping multiple scenarios."""
__tablename__ = "projects"
id: Mapped[int] = mapped_column(Integer, primary_key=True, index=True)
name: Mapped[str] = mapped_column(String(255), nullable=False, unique=True)
location: Mapped[str | None] = mapped_column(String(255), nullable=True)
operation_type: Mapped[MiningOperationType] = mapped_column(
sql_enum(MiningOperationType, name="miningoperationtype"),
nullable=False,
default=MiningOperationType.OTHER,
)
description: Mapped[str | None] = mapped_column(Text, nullable=True)
pricing_settings_id: Mapped[int | None] = mapped_column(
ForeignKey("pricing_settings.id", ondelete="SET NULL"),
nullable=True,
)
created_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now()
)
updated_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now(), onupdate=func.now()
)
scenarios: Mapped[List["Scenario"]] = relationship(
"Scenario",
back_populates="project",
cascade="all, delete-orphan",
passive_deletes=True,
)
pricing_settings: Mapped["PricingSettings | None"] = relationship(
"PricingSettings",
back_populates="projects",
)
profitability_snapshots: Mapped[List["ProjectProfitability"]] = relationship(
"ProjectProfitability",
back_populates="project",
cascade="all, delete-orphan",
order_by=lambda: ProjectProfitability.calculated_at.desc(),
passive_deletes=True,
)
capex_snapshots: Mapped[List["ProjectCapexSnapshot"]] = relationship(
"ProjectCapexSnapshot",
back_populates="project",
cascade="all, delete-orphan",
order_by=lambda: ProjectCapexSnapshot.calculated_at.desc(),
passive_deletes=True,
)
opex_snapshots: Mapped[List["ProjectOpexSnapshot"]] = relationship(
"ProjectOpexSnapshot",
back_populates="project",
cascade="all, delete-orphan",
order_by=lambda: ProjectOpexSnapshot.calculated_at.desc(),
passive_deletes=True,
)
@property
def latest_profitability(self) -> "ProjectProfitability | None":
"""Return the most recent profitability snapshot, if any."""
if not self.profitability_snapshots:
return None
return self.profitability_snapshots[0]
@property
def latest_capex(self) -> "ProjectCapexSnapshot | None":
"""Return the most recent capex snapshot, if any."""
if not self.capex_snapshots:
return None
return self.capex_snapshots[0]
@property
def latest_opex(self) -> "ProjectOpexSnapshot | None":
"""Return the most recent opex snapshot, if any."""
if not self.opex_snapshots:
return None
return self.opex_snapshots[0]
def __repr__(self) -> str: # pragma: no cover - helpful for debugging
return f"Project(id={self.id!r}, name={self.name!r})"

133
models/scenario.py Normal file
View File

@@ -0,0 +1,133 @@
from __future__ import annotations
from datetime import date, datetime
from typing import TYPE_CHECKING, List
from sqlalchemy import (
Date,
DateTime,
ForeignKey,
Integer,
Numeric,
String,
Text,
UniqueConstraint,
)
from sqlalchemy.orm import Mapped, mapped_column, relationship, validates
from sqlalchemy.sql import func
from config.database import Base
from services.currency import normalise_currency
from .enums import ResourceType, ScenarioStatus, sql_enum
from .profitability_snapshot import ScenarioProfitability
from .capex_snapshot import ScenarioCapexSnapshot
from .opex_snapshot import ScenarioOpexSnapshot
if TYPE_CHECKING: # pragma: no cover
from .financial_input import FinancialInput
from .project import Project
from .simulation_parameter import SimulationParameter
class Scenario(Base):
"""A specific configuration of assumptions for a project."""
__tablename__ = "scenarios"
__table_args__ = (
UniqueConstraint("project_id", "name",
name="uq_scenarios_project_name"),
)
id: Mapped[int] = mapped_column(Integer, primary_key=True, index=True)
project_id: Mapped[int] = mapped_column(
ForeignKey("projects.id", ondelete="CASCADE"), nullable=False, index=True
)
name: Mapped[str] = mapped_column(String(255), nullable=False)
description: Mapped[str | None] = mapped_column(Text, nullable=True)
status: Mapped[ScenarioStatus] = mapped_column(
sql_enum(ScenarioStatus, name="scenariostatus"),
nullable=False,
default=ScenarioStatus.DRAFT,
)
start_date: Mapped[date | None] = mapped_column(Date, nullable=True)
end_date: Mapped[date | None] = mapped_column(Date, nullable=True)
discount_rate: Mapped[float | None] = mapped_column(
Numeric(5, 2), nullable=True)
currency: Mapped[str | None] = mapped_column(String(3), nullable=True)
primary_resource: Mapped[ResourceType | None] = mapped_column(
sql_enum(ResourceType, name="resourcetype"), nullable=True
)
created_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now()
)
updated_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now(), onupdate=func.now()
)
project: Mapped["Project"] = relationship(
"Project", back_populates="scenarios")
financial_inputs: Mapped[List["FinancialInput"]] = relationship(
"FinancialInput",
back_populates="scenario",
cascade="all, delete-orphan",
passive_deletes=True,
)
simulation_parameters: Mapped[List["SimulationParameter"]] = relationship(
"SimulationParameter",
back_populates="scenario",
cascade="all, delete-orphan",
passive_deletes=True,
)
profitability_snapshots: Mapped[List["ScenarioProfitability"]] = relationship(
"ScenarioProfitability",
back_populates="scenario",
cascade="all, delete-orphan",
order_by=lambda: ScenarioProfitability.calculated_at.desc(),
passive_deletes=True,
)
capex_snapshots: Mapped[List["ScenarioCapexSnapshot"]] = relationship(
"ScenarioCapexSnapshot",
back_populates="scenario",
cascade="all, delete-orphan",
order_by=lambda: ScenarioCapexSnapshot.calculated_at.desc(),
passive_deletes=True,
)
opex_snapshots: Mapped[List["ScenarioOpexSnapshot"]] = relationship(
"ScenarioOpexSnapshot",
back_populates="scenario",
cascade="all, delete-orphan",
order_by=lambda: ScenarioOpexSnapshot.calculated_at.desc(),
passive_deletes=True,
)
@validates("currency")
def _normalise_currency(self, key: str, value: str | None) -> str | None:
# Normalise to uppercase ISO-4217; raises when the code is malformed.
return normalise_currency(value)
def __repr__(self) -> str: # pragma: no cover
return f"Scenario(id={self.id!r}, name={self.name!r}, project_id={self.project_id!r})"
@property
def latest_profitability(self) -> "ScenarioProfitability | None":
"""Return the most recent profitability snapshot for this scenario."""
if not self.profitability_snapshots:
return None
return self.profitability_snapshots[0]
@property
def latest_capex(self) -> "ScenarioCapexSnapshot | None":
"""Return the most recent capex snapshot for this scenario."""
if not self.capex_snapshots:
return None
return self.capex_snapshots[0]
@property
def latest_opex(self) -> "ScenarioOpexSnapshot | None":
"""Return the most recent opex snapshot for this scenario."""
if not self.opex_snapshots:
return None
return self.opex_snapshots[0]

69
models/simulation_parameter.py Normal file
View File

@@ -0,0 +1,69 @@
from __future__ import annotations
from datetime import datetime
from typing import TYPE_CHECKING
from .enums import DistributionType, ResourceType, StochasticVariable, sql_enum
from sqlalchemy import (
JSON,
DateTime,
ForeignKey,
Integer,
Numeric,
String,
)
from sqlalchemy.orm import Mapped, mapped_column, relationship
from sqlalchemy.sql import func
from config.database import Base
if TYPE_CHECKING: # pragma: no cover
from .scenario import Scenario
class SimulationParameter(Base):
"""Probability distribution settings for scenario simulations."""
__tablename__ = "simulation_parameters"
id: Mapped[int] = mapped_column(Integer, primary_key=True)
scenario_id: Mapped[int] = mapped_column(
ForeignKey("scenarios.id", ondelete="CASCADE"), nullable=False, index=True
)
name: Mapped[str] = mapped_column(String(255), nullable=False)
distribution: Mapped[DistributionType] = mapped_column(
sql_enum(DistributionType, name="distributiontype"), nullable=False
)
variable: Mapped[StochasticVariable | None] = mapped_column(
sql_enum(StochasticVariable, name="stochasticvariable"), nullable=True
)
resource_type: Mapped[ResourceType | None] = mapped_column(
sql_enum(ResourceType, name="resourcetype"), nullable=True
)
mean_value: Mapped[float | None] = mapped_column(
Numeric(18, 4), nullable=True)
standard_deviation: Mapped[float | None] = mapped_column(
Numeric(18, 4), nullable=True)
minimum_value: Mapped[float | None] = mapped_column(
Numeric(18, 4), nullable=True)
maximum_value: Mapped[float | None] = mapped_column(
Numeric(18, 4), nullable=True)
unit: Mapped[str | None] = mapped_column(String(32), nullable=True)
configuration: Mapped[dict | None] = mapped_column(JSON, nullable=True)
created_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now()
)
updated_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now(), onupdate=func.now()
)
scenario: Mapped["Scenario"] = relationship(
"Scenario", back_populates="simulation_parameters"
)
def __repr__(self) -> str: # pragma: no cover
return (
f"SimulationParameter(id={self.id!r}, scenario_id={self.scenario_id!r}, "
f"name={self.name!r})"
)

176
models/user.py Normal file
View File

@@ -0,0 +1,176 @@
from __future__ import annotations
from datetime import datetime
from typing import List, Optional
from passlib.context import CryptContext
try: # pragma: no cover - defensive compatibility shim
import importlib.metadata as importlib_metadata
import argon2 # type: ignore
setattr(argon2, "__version__", importlib_metadata.version("argon2-cffi"))
except Exception:
pass
from sqlalchemy import (
Boolean,
DateTime,
ForeignKey,
Integer,
String,
Text,
UniqueConstraint,
)
from sqlalchemy.orm import Mapped, mapped_column, relationship
from sqlalchemy.sql import func
from config.database import Base
# Configure password hashing strategy. Argon2 provides strong resistance against
# GPU-based cracking attempts, aligning with the security plan.
password_context = CryptContext(schemes=["argon2"], deprecated="auto")
class User(Base):
"""Authenticated platform user with optional elevated privileges."""
__tablename__ = "users"
__table_args__ = (
UniqueConstraint("email", name="uq_users_email"),
UniqueConstraint("username", name="uq_users_username"),
)
id: Mapped[int] = mapped_column(Integer, primary_key=True)
email: Mapped[str] = mapped_column(String(255), nullable=False)
username: Mapped[str] = mapped_column(String(128), nullable=False)
password_hash: Mapped[str] = mapped_column(String(255), nullable=False)
is_active: Mapped[bool] = mapped_column(
Boolean, nullable=False, default=True)
is_superuser: Mapped[bool] = mapped_column(
Boolean, nullable=False, default=False)
last_login_at: Mapped[datetime | None] = mapped_column(
DateTime(timezone=True), nullable=True
)
created_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now()
)
updated_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now(), onupdate=func.now()
)
role_assignments: Mapped[List["UserRole"]] = relationship(
"UserRole",
back_populates="user",
cascade="all, delete-orphan",
foreign_keys="UserRole.user_id",
)
roles: Mapped[List["Role"]] = relationship(
"Role",
secondary="user_roles",
primaryjoin="User.id == UserRole.user_id",
secondaryjoin="Role.id == UserRole.role_id",
viewonly=True,
back_populates="users",
)
def set_password(self, raw_password: str) -> None:
"""Hash and store a password for the user."""
self.password_hash = self.hash_password(raw_password)
@staticmethod
def hash_password(raw_password: str) -> str:
"""Return the Argon2 hash for a clear-text password."""
return password_context.hash(raw_password)
def verify_password(self, candidate_password: str) -> bool:
"""Validate a password against the stored hash."""
if not self.password_hash:
return False
return password_context.verify(candidate_password, self.password_hash)
def __repr__(self) -> str: # pragma: no cover - helpful for debugging
return f"User(id={self.id!r}, email={self.email!r})"
class Role(Base):
"""Role encapsulating a set of permissions."""
__tablename__ = "roles"
id: Mapped[int] = mapped_column(Integer, primary_key=True)
name: Mapped[str] = mapped_column(String(64), nullable=False, unique=True)
display_name: Mapped[str] = mapped_column(String(128), nullable=False)
description: Mapped[str | None] = mapped_column(Text, nullable=True)
created_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now()
)
updated_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now(), onupdate=func.now()
)
assignments: Mapped[List["UserRole"]] = relationship(
"UserRole",
back_populates="role",
cascade="all, delete-orphan",
foreign_keys="UserRole.role_id",
)
users: Mapped[List["User"]] = relationship(
"User",
secondary="user_roles",
primaryjoin="Role.id == UserRole.role_id",
secondaryjoin="User.id == UserRole.user_id",
viewonly=True,
back_populates="roles",
)
def __repr__(self) -> str: # pragma: no cover - helpful for debugging
return f"Role(id={self.id!r}, name={self.name!r})"
class UserRole(Base):
"""Association between users and roles with assignment metadata."""
__tablename__ = "user_roles"
__table_args__ = (
UniqueConstraint("user_id", "role_id", name="uq_user_roles_user_role"),
)
user_id: Mapped[int] = mapped_column(
Integer,
ForeignKey("users.id", ondelete="CASCADE"),
primary_key=True,
)
role_id: Mapped[int] = mapped_column(
Integer,
ForeignKey("roles.id", ondelete="CASCADE"),
primary_key=True,
)
granted_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default=func.now()
)
granted_by: Mapped[Optional[int]] = mapped_column(
Integer,
ForeignKey("users.id", ondelete="SET NULL"),
nullable=True,
)
user: Mapped["User"] = relationship(
"User",
foreign_keys=[user_id],
back_populates="role_assignments",
)
role: Mapped["Role"] = relationship(
"Role",
foreign_keys=[role_id],
back_populates="assignments",
)
granted_by_user: Mapped[Optional["User"]] = relationship(
"User",
foreign_keys=[granted_by],
)
def __repr__(self) -> str: # pragma: no cover - debugging helper
return f"UserRole(user_id={self.user_id!r}, role_id={self.role_id!r})"

117
monitoring/__init__.py Normal file
View File

@@ -0,0 +1,117 @@
from __future__ import annotations
from datetime import datetime, timedelta
from typing import Optional
from fastapi import APIRouter, Depends, Query, Response
from prometheus_client import CONTENT_TYPE_LATEST, generate_latest
from sqlalchemy.orm import Session
from config.database import get_db
from services.metrics import MetricsService
router = APIRouter(prefix="/metrics", tags=["monitoring"])
@router.get("", summary="Prometheus metrics endpoint", include_in_schema=False)
async def metrics_endpoint() -> Response:
payload = generate_latest()
return Response(content=payload, media_type=CONTENT_TYPE_LATEST)
@router.get("/performance", summary="Get performance metrics")
async def get_performance_metrics(
metric_name: Optional[str] = Query(
None, description="Filter by metric name"),
    hours: int = Query(24, description="Look-back window in hours"),
db: Session = Depends(get_db),
) -> dict:
"""Get aggregated performance metrics."""
service = MetricsService(db)
start_time = datetime.utcnow() - timedelta(hours=hours)
if metric_name:
metrics = service.get_metrics(
metric_name=metric_name, start_time=start_time)
aggregated = service.get_aggregated_metrics(
metric_name, start_time=start_time)
return {
"metric_name": metric_name,
"period_hours": hours,
"aggregated": aggregated,
"recent_samples": [
{
"timestamp": m.timestamp.isoformat(),
"value": m.value,
"labels": m.labels,
"endpoint": m.endpoint,
"method": m.method,
"status_code": m.status_code,
"duration_seconds": m.duration_seconds,
}
for m in metrics[:50] # Last 50 samples
],
}
# Return summary for all metrics
all_metrics = service.get_metrics(start_time=start_time, limit=1000)
metric_types = {}
for m in all_metrics:
if m.metric_name not in metric_types:
metric_types[m.metric_name] = []
metric_types[m.metric_name].append(m.value)
summary = {}
for name, values in metric_types.items():
summary[name] = {
"count": len(values),
"avg": sum(values) / len(values) if values else 0,
"min": min(values) if values else 0,
"max": max(values) if values else 0,
}
return {
"period_hours": hours,
"summary": summary,
}
@router.get("/health", summary="Detailed health check with metrics")
async def detailed_health(db: Session = Depends(get_db)) -> dict:
"""Get detailed health status with recent metrics."""
service = MetricsService(db)
last_hour = datetime.utcnow() - timedelta(hours=1)
# Get request metrics from last hour
request_metrics = service.get_metrics(
metric_name="http_request", start_time=last_hour
)
if request_metrics:
durations = []
error_count = 0
for m in request_metrics:
if m.duration_seconds is not None:
durations.append(m.duration_seconds)
if m.status_code is not None:
if m.status_code >= 400:
error_count += 1
total_requests = len(request_metrics)
avg_duration = sum(durations) / len(durations) if durations else 0
error_rate = error_count / total_requests if total_requests > 0 else 0
else:
avg_duration = 0
error_rate = 0
total_requests = 0
return {
"status": "ok",
"timestamp": datetime.utcnow().isoformat(),
"metrics": {
"requests_last_hour": total_requests,
"avg_response_time_seconds": avg_duration,
"error_rate": error_rate,
},
}
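An exercise sketch for the three routes, assuming the router is included on an application object `app` with a working database dependency:

# Sketch: hitting the monitoring endpoints from a test client.
from fastapi.testclient import TestClient

client = TestClient(app)  # `app` is assumed to include this router
print(client.get("/metrics").headers["content-type"])                 # Prometheus text format
print(client.get("/metrics/performance", params={"hours": 1}).json())
print(client.get("/metrics/health").json()["status"])                 # "ok"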

108
monitoring/metrics.py Normal file
View File

@@ -0,0 +1,108 @@
from __future__ import annotations
from prometheus_client import Counter, Histogram, Gauge
IMPORT_DURATION = Histogram(
"calminer_import_duration_seconds",
"Duration of import preview and commit operations",
labelnames=("dataset", "action", "status"),
)
IMPORT_TOTAL = Counter(
"calminer_import_total",
"Count of import operations",
labelnames=("dataset", "action", "status"),
)
EXPORT_DURATION = Histogram(
"calminer_export_duration_seconds",
"Duration of export operations",
labelnames=("dataset", "status", "format"),
)
EXPORT_TOTAL = Counter(
"calminer_export_total",
"Count of export operations",
labelnames=("dataset", "status", "format"),
)
# General performance metrics
REQUEST_DURATION = Histogram(
"calminer_request_duration_seconds",
"Duration of HTTP requests",
labelnames=("method", "endpoint", "status"),
)
REQUEST_TOTAL = Counter(
"calminer_request_total",
"Count of HTTP requests",
labelnames=("method", "endpoint", "status"),
)
ACTIVE_CONNECTIONS = Gauge(
"calminer_active_connections",
"Number of active connections",
)
DB_CONNECTIONS = Gauge(
"calminer_db_connections",
"Number of database connections",
)
# Business metrics
PROJECT_OPERATIONS = Counter(
"calminer_project_operations_total",
"Count of project operations",
labelnames=("operation", "status"),
)
SCENARIO_OPERATIONS = Counter(
"calminer_scenario_operations_total",
"Count of scenario operations",
labelnames=("operation", "status"),
)
SIMULATION_RUNS = Counter(
"calminer_simulation_runs_total",
"Count of Monte Carlo simulation runs",
labelnames=("status",),
)
SIMULATION_DURATION = Histogram(
"calminer_simulation_duration_seconds",
"Duration of Monte Carlo simulations",
labelnames=("status",),
)
def observe_import(action: str, dataset: str, status: str, seconds: float) -> None:
IMPORT_TOTAL.labels(dataset=dataset, action=action, status=status).inc()
IMPORT_DURATION.labels(dataset=dataset, action=action,
status=status).observe(seconds)
def observe_export(dataset: str, status: str, export_format: str, seconds: float) -> None:
EXPORT_TOTAL.labels(dataset=dataset, status=status,
format=export_format).inc()
EXPORT_DURATION.labels(dataset=dataset, status=status,
format=export_format).observe(seconds)
def observe_request(method: str, endpoint: str, status: int, seconds: float) -> None:
REQUEST_TOTAL.labels(method=method, endpoint=endpoint, status=status).inc()
REQUEST_DURATION.labels(method=method, endpoint=endpoint,
status=status).observe(seconds)
def observe_project_operation(operation: str, status: str = "success") -> None:
PROJECT_OPERATIONS.labels(operation=operation, status=status).inc()
def observe_scenario_operation(operation: str, status: str = "success") -> None:
SCENARIO_OPERATIONS.labels(operation=operation, status=status).inc()
def observe_simulation(status: str, duration_seconds: float) -> None:
SIMULATION_RUNS.labels(status=status).inc()
SIMULATION_DURATION.labels(status=status).observe(duration_seconds)
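A call-site sketch for the business helpers; `run_simulation` is a hypothetical workload standing in for the real entry point:

# Sketch: timing a simulation and recording its outcome.
import time

start = time.perf_counter()
try:
    run_simulation()  # hypothetical Monte Carlo entry point
    observe_simulation("success", time.perf_counter() - start)
except Exception:
    observe_simulation("failure", time.perf_counter() - start)
    raise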

pyproject.toml
View File

@@ -14,3 +14,33 @@ exclude = '''
)/
'''
[tool.pytest.ini_options]
pythonpath = ["."]
testpaths = ["tests"]
addopts = "-ra --strict-config --strict-markers --cov=. --cov-report=term-missing --cov-report=xml --cov-fail-under=80"
markers = [
"asyncio: marks tests as async (using pytest-asyncio)",
]
[tool.coverage.run]
branch = true
source = ["."]
omit = [
"tests/*",
"scripts/*",
"main.py",
"routes/reports.py",
"routes/calculations.py",
"services/calculations.py",
"services/importers.py",
"services/reporting.py",
]
[tool.coverage.report]
skip_empty = true
show_missing = true
[tool.bandit]
exclude_dirs = ["scripts"]
skips = ["B101", "B601"] # B101: assert_used, B601: shell_injection (may be false positives)

1
requirements-dev.txt Normal file
View File

@@ -0,0 +1 @@
-r requirements.txt

View File

@@ -1,7 +1,9 @@
pytest
pytest-asyncio
pytest-cov
pytest-httpx
python-jose
ruff
black
mypy
bandit

requirements.txt
View File

@@ -9,4 +9,9 @@ jinja2
pandas
numpy
passlib
argon2-cffi
python-jose
python-multipart
openpyxl
prometheus-client
plotly

1
routes/__init__.py Normal file
View File

@@ -0,0 +1 @@
"""API route registrations."""

484
routes/auth.py Normal file
View File

@@ -0,0 +1,484 @@
from __future__ import annotations
from datetime import datetime, timedelta, timezone
from typing import Any, Iterable
from fastapi import APIRouter, Depends, HTTPException, Request, UploadFile, status
from fastapi.responses import HTMLResponse, RedirectResponse
from pydantic import ValidationError
from starlette.datastructures import FormData
from dependencies import (
get_auth_session,
get_jwt_settings,
get_session_strategy,
get_unit_of_work,
require_current_user,
)
from models import Role, User
from schemas.auth import (
LoginForm,
PasswordResetForm,
PasswordResetRequestForm,
RegistrationForm,
)
from services.exceptions import EntityConflictError
from services.security import (
JWTSettings,
TokenDecodeError,
TokenExpiredError,
TokenTypeMismatchError,
create_access_token,
create_refresh_token,
decode_access_token,
hash_password,
verify_password,
)
from services.session import (
AuthSession,
SessionStrategy,
clear_session_cookies,
set_session_cookies,
)
from services.repositories import RoleRepository, UserRepository
from services.unit_of_work import UnitOfWork
from routes.template_filters import create_templates
router = APIRouter(tags=["Authentication"])
templates = create_templates()
_PASSWORD_RESET_SCOPE = "password-reset"
_AUTH_SCOPE = "auth"
def _template(
request: Request,
template_name: str,
context: dict[str, Any],
*,
status_code: int = status.HTTP_200_OK,
) -> HTMLResponse:
return templates.TemplateResponse(
request,
template_name,
context,
status_code=status_code,
)
def _validation_errors(exc: ValidationError) -> list[str]:
return [error.get("msg", "Invalid input.") for error in exc.errors()]
def _scopes(include: Iterable[str]) -> list[str]:
return list(include)
def _normalise_form_data(form_data: FormData) -> dict[str, str]:
normalised: dict[str, str] = {}
for key, value in form_data.multi_items():
if isinstance(value, UploadFile):
str_value = value.filename or ""
else:
str_value = str(value)
normalised[key] = str_value
return normalised
def _require_users_repo(uow: UnitOfWork) -> UserRepository:
if not uow.users:
raise RuntimeError("User repository is not initialised")
return uow.users
def _require_roles_repo(uow: UnitOfWork) -> RoleRepository:
if not uow.roles:
raise RuntimeError("Role repository is not initialised")
return uow.roles
@router.get("/login", response_class=HTMLResponse, include_in_schema=False, name="auth.login_form")
def login_form(request: Request) -> HTMLResponse:
return _template(
request,
"login.html",
{
"form_action": request.url_for("auth.login_submit"),
"errors": [],
"username": "",
},
)
@router.post("/login", include_in_schema=False, name="auth.login_submit")
async def login_submit(
request: Request,
uow: UnitOfWork = Depends(get_unit_of_work),
jwt_settings: JWTSettings = Depends(get_jwt_settings),
session_strategy: SessionStrategy = Depends(get_session_strategy),
):
form_data = _normalise_form_data(await request.form())
try:
form = LoginForm(**form_data)
except ValidationError as exc:
return _template(
request,
"login.html",
{
"form_action": request.url_for("auth.login_submit"),
"errors": _validation_errors(exc),
},
status_code=status.HTTP_400_BAD_REQUEST,
)
identifier = form.username
users_repo = _require_users_repo(uow)
user = _lookup_user(users_repo, identifier)
errors: list[str] = []
if not user or not verify_password(form.password, user.password_hash):
errors.append("Invalid username or password.")
elif not user.is_active:
errors.append("Account is inactive. Contact an administrator.")
if errors:
return _template(
request,
"login.html",
{
"form_action": request.url_for("auth.login_submit"),
"errors": errors,
"username": identifier,
},
status_code=status.HTTP_400_BAD_REQUEST,
)
assert user is not None # mypy hint - guarded above
user.last_login_at = datetime.now(timezone.utc)
access_token = create_access_token(
str(user.id),
jwt_settings,
scopes=_scopes((_AUTH_SCOPE,)),
)
refresh_token = create_refresh_token(
str(user.id),
jwt_settings,
scopes=_scopes((_AUTH_SCOPE,)),
)
response = RedirectResponse(
request.url_for("dashboard.home"),
status_code=status.HTTP_303_SEE_OTHER,
)
set_session_cookies(
response,
access_token=access_token,
refresh_token=refresh_token,
strategy=session_strategy,
jwt_settings=jwt_settings,
)
return response
@router.get("/logout", include_in_schema=False, name="auth.logout")
async def logout(
request: Request,
_: User = Depends(require_current_user),
session: AuthSession = Depends(get_auth_session),
session_strategy: SessionStrategy = Depends(get_session_strategy),
) -> RedirectResponse:
session.mark_cleared()
redirect_url = request.url_for(
"auth.login_form").include_query_params(logout="1")
response = RedirectResponse(
redirect_url,
status_code=status.HTTP_303_SEE_OTHER,
)
clear_session_cookies(response, session_strategy)
return response
def _lookup_user(users_repo: UserRepository, identifier: str) -> User | None:
if "@" in identifier:
return users_repo.get_by_email(identifier.lower(), with_roles=True)
return users_repo.get_by_username(identifier, with_roles=True)
@router.get("/register", response_class=HTMLResponse, include_in_schema=False, name="auth.register_form")
def register_form(request: Request) -> HTMLResponse:
return _template(
request,
"register.html",
{
"form_action": request.url_for("auth.register_submit"),
"errors": [],
"form_data": None,
},
)
@router.post("/register", include_in_schema=False, name="auth.register_submit")
async def register_submit(
request: Request,
uow: UnitOfWork = Depends(get_unit_of_work),
):
form_data = _normalise_form_data(await request.form())
try:
form = RegistrationForm(**form_data)
except ValidationError as exc:
return _registration_error_response(request, _validation_errors(exc))
errors: list[str] = []
users_repo = _require_users_repo(uow)
roles_repo = _require_roles_repo(uow)
uow.ensure_default_roles()
if users_repo.get_by_email(form.email):
errors.append("Email is already registered.")
if users_repo.get_by_username(form.username):
errors.append("Username is already taken.")
if errors:
return _registration_error_response(request, errors, form)
user = User(
email=form.email,
username=form.username,
password_hash=hash_password(form.password),
is_active=True,
is_superuser=False,
)
try:
created = users_repo.create(user)
except EntityConflictError:
return _registration_error_response(
request,
["An account with this username or email already exists."],
form,
)
viewer_role = _ensure_viewer_role(roles_repo)
if viewer_role is not None:
users_repo.assign_role(
user_id=created.id,
role_id=viewer_role.id,
granted_by=created.id,
)
redirect_url = request.url_for(
"auth.login_form").include_query_params(registered="1")
return RedirectResponse(
redirect_url,
status_code=status.HTTP_303_SEE_OTHER,
)
def _registration_error_response(
request: Request,
errors: list[str],
form: RegistrationForm | None = None,
) -> HTMLResponse:
context = {
"form_action": request.url_for("auth.register_submit"),
"errors": errors,
"form_data": form.model_dump(exclude={"password", "confirm_password"}) if form else None,
}
return _template(
request,
"register.html",
context,
status_code=status.HTTP_400_BAD_REQUEST,
)
def _ensure_viewer_role(roles_repo: RoleRepository) -> Role | None:
    # ensure_default_roles() runs before registration reaches this point, so
    # the viewer role is expected to exist; returns None if it is missing.
    return roles_repo.get_by_name("viewer")
@router.get(
"/forgot-password",
response_class=HTMLResponse,
include_in_schema=False,
name="auth.password_reset_request_form",
)
def password_reset_request_form(request: Request) -> HTMLResponse:
return _template(
request,
"forgot_password.html",
{
"form_action": request.url_for("auth.password_reset_request_submit"),
"errors": [],
"message": None,
},
)
@router.post(
"/forgot-password",
include_in_schema=False,
name="auth.password_reset_request_submit",
)
async def password_reset_request_submit(
request: Request,
uow: UnitOfWork = Depends(get_unit_of_work),
jwt_settings: JWTSettings = Depends(get_jwt_settings),
):
form_data = _normalise_form_data(await request.form())
try:
form = PasswordResetRequestForm(**form_data)
except ValidationError as exc:
return _template(
request,
"forgot_password.html",
{
"form_action": request.url_for("auth.password_reset_request_submit"),
"errors": _validation_errors(exc),
"message": None,
},
status_code=status.HTTP_400_BAD_REQUEST,
)
users_repo = _require_users_repo(uow)
user = users_repo.get_by_email(form.email)
if not user:
return _template(
request,
"forgot_password.html",
{
"form_action": request.url_for("auth.password_reset_request_submit"),
"errors": [],
"message": "If an account exists, a reset link has been sent.",
},
)
token = create_access_token(
str(user.id),
jwt_settings,
scopes=_scopes((_PASSWORD_RESET_SCOPE,)),
expires_delta=timedelta(hours=1),
)
    # The reset link (token included) is handed straight back via redirect;
    # no email delivery happens in this handler.
    reset_url = request.url_for(
        "auth.password_reset_form").include_query_params(token=token)
    return RedirectResponse(reset_url, status_code=status.HTTP_303_SEE_OTHER)
@router.get(
"/reset-password",
response_class=HTMLResponse,
include_in_schema=False,
name="auth.password_reset_form",
)
def password_reset_form(
request: Request,
token: str | None = None,
jwt_settings: JWTSettings = Depends(get_jwt_settings),
) -> HTMLResponse:
errors: list[str] = []
if not token:
errors.append("Missing password reset token.")
else:
try:
payload = decode_access_token(token, jwt_settings)
if _PASSWORD_RESET_SCOPE not in payload.scopes:
errors.append("Invalid token scope.")
except TokenExpiredError:
errors.append(
"Token has expired. Please request a new password reset.")
except (TokenDecodeError, TokenTypeMismatchError):
errors.append("Invalid password reset token.")
return _template(
request,
"reset_password.html",
{
"form_action": request.url_for("auth.password_reset_submit"),
"token": token,
"errors": errors,
},
status_code=status.HTTP_400_BAD_REQUEST if errors else status.HTTP_200_OK,
)
@router.post(
"/reset-password",
include_in_schema=False,
name="auth.password_reset_submit",
)
async def password_reset_submit(
request: Request,
uow: UnitOfWork = Depends(get_unit_of_work),
jwt_settings: JWTSettings = Depends(get_jwt_settings),
):
form_data = _normalise_form_data(await request.form())
try:
form = PasswordResetForm(**form_data)
except ValidationError as exc:
return _template(
request,
"reset_password.html",
{
"form_action": request.url_for("auth.password_reset_submit"),
"token": form_data.get("token"),
"errors": _validation_errors(exc),
},
status_code=status.HTTP_400_BAD_REQUEST,
)
try:
payload = decode_access_token(form.token, jwt_settings)
except TokenExpiredError:
return _reset_error_response(
request,
form.token,
"Token has expired. Please request a new password reset.",
)
except (TokenDecodeError, TokenTypeMismatchError):
return _reset_error_response(
request,
form.token,
"Invalid password reset token.",
)
if _PASSWORD_RESET_SCOPE not in payload.scopes:
return _reset_error_response(
request,
form.token,
"Invalid password reset token scope.",
)
users_repo = _require_users_repo(uow)
user_id = int(payload.sub)
user = users_repo.get(user_id)
if not user:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND, detail="User not found")
user.set_password(form.password)
if not user.is_active:
user.is_active = True
redirect_url = request.url_for(
"auth.login_form").include_query_params(reset="1")
return RedirectResponse(
redirect_url,
status_code=status.HTTP_303_SEE_OTHER,
)
def _reset_error_response(request: Request, token: str, message: str) -> HTMLResponse:
return _template(
request,
"reset_password.html",
{
"form_action": request.url_for("auth.password_reset_submit"),
"token": token,
"errors": [message],
},
status_code=status.HTTP_400_BAD_REQUEST,
)
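
A quick way to exercise the form-based login flow end to end is Starlette's TestClient; a sketch assuming the application object lives at main:app and a seeded user exists (credentials are placeholders):

from fastapi.testclient import TestClient

from main import app  # assumed entry point

client = TestClient(app)

# A successful login responds 303 and sets the cookies written by
# set_session_cookies.
response = client.post(
    "/login",
    data={"username": "analyst", "password": "s3cret!"},
    follow_redirects=False,
)
assert response.status_code == 303
assert "set-cookie" in response.headers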

routes/calculations.py Normal file

File diff suppressed because it is too large.

routes/dashboard.py Normal file

@@ -0,0 +1,130 @@
from __future__ import annotations
from datetime import datetime
from fastapi import APIRouter, Depends, Request
from fastapi.responses import HTMLResponse, RedirectResponse
from routes.template_filters import create_templates
from dependencies import get_current_user, get_unit_of_work
from models import ScenarioStatus, User
from services.unit_of_work import UnitOfWork
router = APIRouter(tags=["Dashboard"])
templates = create_templates()
def _format_timestamp(moment: datetime | None) -> str | None:
if moment is None:
return None
return moment.strftime("%Y-%m-%d")
def _format_timestamp_with_time(moment: datetime | None) -> str | None:
if moment is None:
return None
return moment.strftime("%Y-%m-%d %H:%M")
def _load_metrics(uow: UnitOfWork) -> dict[str, object]:
if not uow.projects or not uow.scenarios or not uow.financial_inputs:
raise RuntimeError("UnitOfWork repositories not initialised")
total_projects = uow.projects.count()
active_scenarios = uow.scenarios.count_by_status(ScenarioStatus.ACTIVE)
pending_simulations = uow.scenarios.count_by_status(ScenarioStatus.DRAFT)
last_import_at = uow.financial_inputs.latest_created_at()
return {
"total_projects": total_projects,
"active_scenarios": active_scenarios,
"pending_simulations": pending_simulations,
"last_import": _format_timestamp(last_import_at),
}
def _load_recent_projects(uow: UnitOfWork) -> list:
if not uow.projects:
raise RuntimeError("Project repository not initialised")
return list(uow.projects.recent(limit=5))
def _load_simulation_updates(uow: UnitOfWork) -> list[dict[str, object]]:
updates: list[dict[str, object]] = []
if not uow.scenarios:
raise RuntimeError("Scenario repository not initialised")
scenarios = uow.scenarios.recent(limit=5, with_project=True)
for scenario in scenarios:
project_name = scenario.project.name if scenario.project else f"Project #{scenario.project_id}"
timestamp_label = _format_timestamp_with_time(scenario.updated_at)
updates.append(
{
"title": f"{scenario.name} · {scenario.status.value.title()}",
"description": f"Latest update recorded for {project_name}.",
"timestamp": scenario.updated_at,
"timestamp_label": timestamp_label,
}
)
return updates
def _load_scenario_alerts(
request: Request, uow: UnitOfWork
) -> list[dict[str, object]]:
alerts: list[dict[str, object]] = []
if not uow.scenarios:
raise RuntimeError("Scenario repository not initialised")
drafts = uow.scenarios.list_by_status(
ScenarioStatus.DRAFT, limit=3, with_project=True
)
for scenario in drafts:
project_name = scenario.project.name if scenario.project else f"Project #{scenario.project_id}"
alerts.append(
{
"title": f"Draft scenario: {scenario.name}",
"message": f"{project_name} has a scenario awaiting validation.",
"link": request.url_for(
"projects.view_project", project_id=scenario.project_id
),
}
)
if not alerts:
archived = uow.scenarios.list_by_status(
ScenarioStatus.ARCHIVED, limit=3, with_project=True
)
for scenario in archived:
project_name = scenario.project.name if scenario.project else f"Project #{scenario.project_id}"
alerts.append(
{
"title": f"Archived scenario: {scenario.name}",
"message": f"Review archived scenario insights for {project_name}.",
"link": request.url_for(
"scenarios.view_scenario", scenario_id=scenario.id
),
}
)
return alerts
@router.get("/", include_in_schema=False, name="dashboard.home", response_model=None)
def dashboard_home(
request: Request,
user: User | None = Depends(get_current_user),
uow: UnitOfWork = Depends(get_unit_of_work),
) -> HTMLResponse | RedirectResponse:
if user is None:
return RedirectResponse(request.url_for("auth.login_form"), status_code=303)
context = {
"metrics": _load_metrics(uow),
"recent_projects": _load_recent_projects(uow),
"simulation_updates": _load_simulation_updates(uow),
"scenario_alerts": _load_scenario_alerts(request, uow),
"export_modals": {
"projects": request.url_for("exports.modal", dataset="projects"),
"scenarios": request.url_for("exports.modal", dataset="scenarios"),
},
}
return templates.TemplateResponse(request, "dashboard.html", context)
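
Because _load_metrics only touches three repository methods, it can be unit-tested with a duck-typed stand-in for UnitOfWork; a sketch using SimpleNamespace, where the stub shape mirrors just the calls made above:

from types import SimpleNamespace

from models import ScenarioStatus
from routes.dashboard import _load_metrics

stub_uow = SimpleNamespace(
    projects=SimpleNamespace(count=lambda: 3),
    scenarios=SimpleNamespace(
        count_by_status=lambda s: 2 if s is ScenarioStatus.ACTIVE else 1,
    ),
    financial_inputs=SimpleNamespace(latest_created_at=lambda: None),
)

metrics = _load_metrics(stub_uow)
assert metrics["total_projects"] == 3
assert metrics["active_scenarios"] == 2
assert metrics["last_import"] is None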

routes/exports.py Normal file

@@ -0,0 +1,363 @@
from __future__ import annotations
import logging
import time
from datetime import datetime, timezone
from typing import Annotated
from fastapi import APIRouter, Depends, HTTPException, Request, Response, status
from fastapi.responses import HTMLResponse, StreamingResponse
from dependencies import get_unit_of_work, require_any_role
from schemas.exports import (
ExportFormat,
ProjectExportRequest,
ScenarioExportRequest,
)
from services.export_serializers import (
export_projects_to_excel,
export_scenarios_to_excel,
stream_projects_to_csv,
stream_scenarios_to_csv,
)
from services.unit_of_work import UnitOfWork
from models.import_export_log import ImportExportLog
from monitoring.metrics import observe_export
from routes.template_filters import create_templates
logger = logging.getLogger(__name__)
router = APIRouter(prefix="/exports", tags=["exports"])
templates = create_templates()
@router.get(
"/modal/{dataset}",
response_model=None,
response_class=HTMLResponse,
include_in_schema=False,
name="exports.modal",
)
async def export_modal(
dataset: str,
request: Request,
) -> HTMLResponse:
dataset = dataset.lower()
if dataset not in {"projects", "scenarios"}:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND, detail="Unknown dataset")
submit_url = request.url_for(
"export_projects" if dataset == "projects" else "export_scenarios"
)
return templates.TemplateResponse(
request,
"exports/modal.html",
{
"dataset": dataset,
"submit_url": submit_url,
},
)
def _timestamp_suffix() -> str:
return datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
def _ensure_repository(repo, name: str):
if repo is None:
raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=f"{name} repository unavailable")
return repo
def _record_export_audit(
*,
uow: UnitOfWork,
dataset: str,
status: str,
export_format: ExportFormat,
row_count: int,
filename: str | None,
) -> None:
try:
if uow.session is None:
return
log = ImportExportLog(
action="export",
dataset=dataset,
status=status,
filename=filename,
row_count=row_count,
detail=f"format={export_format.value}",
)
uow.session.add(log)
uow.commit()
except Exception:
# best-effort auditing, do not break exports
if uow.session is not None:
uow.session.rollback()
logger.exception(
"export.audit.failed",
extra={
"event": "export.audit",
"dataset": dataset,
"status": status,
"format": export_format.value,
},
)
@router.post(
"/projects",
status_code=status.HTTP_200_OK,
response_class=StreamingResponse,
dependencies=[Depends(require_any_role(
"admin", "project_manager", "analyst"))],
)
async def export_projects(
request: ProjectExportRequest,
uow: Annotated[UnitOfWork, Depends(get_unit_of_work)],
) -> Response:
project_repo = _ensure_repository(
getattr(uow, "projects", None), "Project")
start = time.perf_counter()
try:
projects = project_repo.filtered_for_export(request.filters)
except ValueError as exc:
_record_export_audit(
uow=uow,
dataset="projects",
status="failure",
export_format=request.format,
row_count=0,
filename=None,
)
logger.warning(
"export.validation_failed",
extra={
"event": "export",
"dataset": "projects",
"status": "validation_failed",
"format": request.format.value,
"error": str(exc),
},
)
raise HTTPException(
status_code=status.HTTP_422_UNPROCESSABLE_CONTENT,
detail=str(exc),
) from exc
    except Exception:
_record_export_audit(
uow=uow,
dataset="projects",
status="failure",
export_format=request.format,
row_count=0,
filename=None,
)
logger.exception(
"export.failed",
extra={
"event": "export",
"dataset": "projects",
"status": "failure",
"format": request.format.value,
},
)
        raise
filename = f"projects-{_timestamp_suffix()}"
if request.format == ExportFormat.CSV:
stream = stream_projects_to_csv(projects)
response = StreamingResponse(stream, media_type="text/csv")
response.headers["Content-Disposition"] = f"attachment; filename={filename}.csv"
_record_export_audit(
uow=uow,
dataset="projects",
status="success",
export_format=request.format,
row_count=len(projects),
filename=f"{filename}.csv",
)
logger.info(
"export",
extra={
"event": "export",
"dataset": "projects",
"status": "success",
"format": request.format.value,
"row_count": len(projects),
"filename": f"{filename}.csv",
},
)
observe_export(
dataset="projects",
status="success",
export_format=request.format.value,
seconds=time.perf_counter() - start,
)
return response
data = export_projects_to_excel(projects)
_record_export_audit(
uow=uow,
dataset="projects",
status="success",
export_format=request.format,
row_count=len(projects),
filename=f"{filename}.xlsx",
)
logger.info(
"export",
extra={
"event": "export",
"dataset": "projects",
"status": "success",
"format": request.format.value,
"row_count": len(projects),
"filename": f"{filename}.xlsx",
},
)
observe_export(
dataset="projects",
status="success",
export_format=request.format.value,
seconds=time.perf_counter() - start,
)
return StreamingResponse(
iter([data]),
media_type="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
headers={
"Content-Disposition": f"attachment; filename={filename}.xlsx",
},
)
@router.post(
"/scenarios",
status_code=status.HTTP_200_OK,
response_class=StreamingResponse,
dependencies=[Depends(require_any_role(
"admin", "project_manager", "analyst"))],
)
async def export_scenarios(
request: ScenarioExportRequest,
uow: Annotated[UnitOfWork, Depends(get_unit_of_work)],
) -> Response:
scenario_repo = _ensure_repository(
getattr(uow, "scenarios", None), "Scenario")
start = time.perf_counter()
try:
scenarios = scenario_repo.filtered_for_export(
request.filters, include_project=True)
except ValueError as exc:
_record_export_audit(
uow=uow,
dataset="scenarios",
status="failure",
export_format=request.format,
row_count=0,
filename=None,
)
logger.warning(
"export.validation_failed",
extra={
"event": "export",
"dataset": "scenarios",
"status": "validation_failed",
"format": request.format.value,
"error": str(exc),
},
)
raise HTTPException(
status_code=status.HTTP_422_UNPROCESSABLE_CONTENT,
detail=str(exc),
) from exc
    except Exception:
_record_export_audit(
uow=uow,
dataset="scenarios",
status="failure",
export_format=request.format,
row_count=0,
filename=None,
)
logger.exception(
"export.failed",
extra={
"event": "export",
"dataset": "scenarios",
"status": "failure",
"format": request.format.value,
},
)
        raise
filename = f"scenarios-{_timestamp_suffix()}"
if request.format == ExportFormat.CSV:
stream = stream_scenarios_to_csv(scenarios)
response = StreamingResponse(stream, media_type="text/csv")
response.headers["Content-Disposition"] = f"attachment; filename={filename}.csv"
_record_export_audit(
uow=uow,
dataset="scenarios",
status="success",
export_format=request.format,
row_count=len(scenarios),
filename=f"{filename}.csv",
)
logger.info(
"export",
extra={
"event": "export",
"dataset": "scenarios",
"status": "success",
"format": request.format.value,
"row_count": len(scenarios),
"filename": f"{filename}.csv",
},
)
observe_export(
dataset="scenarios",
status="success",
export_format=request.format.value,
seconds=time.perf_counter() - start,
)
return response
data = export_scenarios_to_excel(scenarios)
_record_export_audit(
uow=uow,
dataset="scenarios",
status="success",
export_format=request.format,
row_count=len(scenarios),
filename=f"{filename}.xlsx",
)
logger.info(
"export",
extra={
"event": "export",
"dataset": "scenarios",
"status": "success",
"format": request.format.value,
"row_count": len(scenarios),
"filename": f"{filename}.xlsx",
},
)
observe_export(
dataset="scenarios",
status="success",
export_format=request.format.value,
seconds=time.perf_counter() - start,
)
return StreamingResponse(
iter([data]),
media_type="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
headers={
"Content-Disposition": f"attachment; filename={filename}.xlsx",
},
)
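
Driven over HTTP, the export endpoints take a JSON body whose field names mirror the request models used above (format, filters). A sketch assuming ExportFormat.CSV serialises as "csv"; the empty filter object and the cookie name are also assumptions:

import httpx

session_cookies = {"access_token": "<jwt>"}  # cookie name is an assumption

response = httpx.post(
    "http://localhost:8000/exports/projects",
    json={"format": "csv", "filters": {}},
    cookies=session_cookies,
)
response.raise_for_status()

# The Content-Disposition header carries the timestamped filename.
with open("projects.csv", "wb") as fh:
    fh.write(response.content)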

routes/imports.py Normal file

@@ -0,0 +1,170 @@
from __future__ import annotations
from io import BytesIO
from fastapi import APIRouter, Depends, File, HTTPException, UploadFile, status
from fastapi import Request
from fastapi.responses import HTMLResponse
from dependencies import (
get_import_ingestion_service,
require_roles,
require_roles_html,
)
from models import User
from schemas.imports import (
ImportCommitRequest,
ProjectImportCommitResponse,
ProjectImportPreviewResponse,
ScenarioImportCommitResponse,
ScenarioImportPreviewResponse,
)
from services.importers import ImportIngestionService, UnsupportedImportFormat
from routes.template_filters import create_templates
router = APIRouter(prefix="/imports", tags=["Imports"])
templates = create_templates()
MANAGE_ROLES = ("project_manager", "admin")
@router.get(
"/ui",
response_class=HTMLResponse,
include_in_schema=False,
name="imports.ui",
)
def import_dashboard(
request: Request,
_: User = Depends(require_roles_html(*MANAGE_ROLES)),
) -> HTMLResponse:
return templates.TemplateResponse(
request,
"imports/ui.html",
{
"title": "Imports",
},
)
async def _read_upload_file(upload: UploadFile) -> BytesIO:
content = await upload.read()
if not content:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail="Uploaded file is empty.",
)
return BytesIO(content)
@router.post(
"/projects/preview",
response_model=ProjectImportPreviewResponse,
status_code=status.HTTP_200_OK,
)
async def preview_project_import(
file: UploadFile = File(...,
description="Project import file (CSV or Excel)"),
_: User = Depends(require_roles(*MANAGE_ROLES)),
ingestion_service: ImportIngestionService = Depends(
get_import_ingestion_service),
) -> ProjectImportPreviewResponse:
if not file.filename:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail="Filename is required for import.",
)
stream = await _read_upload_file(file)
try:
preview = ingestion_service.preview_projects(stream, file.filename)
except UnsupportedImportFormat as exc:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=str(exc),
) from exc
return ProjectImportPreviewResponse.model_validate(preview)
@router.post(
"/scenarios/preview",
response_model=ScenarioImportPreviewResponse,
status_code=status.HTTP_200_OK,
)
async def preview_scenario_import(
file: UploadFile = File(...,
description="Scenario import file (CSV or Excel)"),
_: User = Depends(require_roles(*MANAGE_ROLES)),
ingestion_service: ImportIngestionService = Depends(
get_import_ingestion_service),
) -> ScenarioImportPreviewResponse:
if not file.filename:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail="Filename is required for import.",
)
stream = await _read_upload_file(file)
try:
preview = ingestion_service.preview_scenarios(stream, file.filename)
except UnsupportedImportFormat as exc:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=str(exc),
) from exc
return ScenarioImportPreviewResponse.model_validate(preview)
def _value_error_status(exc: ValueError) -> int:
detail = str(exc)
if detail.lower().startswith("unknown"):
return status.HTTP_404_NOT_FOUND
return status.HTTP_400_BAD_REQUEST
@router.post(
"/projects/commit",
response_model=ProjectImportCommitResponse,
status_code=status.HTTP_200_OK,
)
async def commit_project_import_endpoint(
payload: ImportCommitRequest,
_: User = Depends(require_roles(*MANAGE_ROLES)),
ingestion_service: ImportIngestionService = Depends(
get_import_ingestion_service),
) -> ProjectImportCommitResponse:
try:
result = ingestion_service.commit_project_import(payload.token)
except ValueError as exc:
raise HTTPException(
status_code=_value_error_status(exc),
detail=str(exc),
) from exc
return ProjectImportCommitResponse.model_validate(result)
@router.post(
"/scenarios/commit",
response_model=ScenarioImportCommitResponse,
status_code=status.HTTP_200_OK,
)
async def commit_scenario_import_endpoint(
payload: ImportCommitRequest,
_: User = Depends(require_roles(*MANAGE_ROLES)),
ingestion_service: ImportIngestionService = Depends(
get_import_ingestion_service),
) -> ScenarioImportCommitResponse:
try:
result = ingestion_service.commit_scenario_import(payload.token)
except ValueError as exc:
raise HTTPException(
status_code=_value_error_status(exc),
detail=str(exc),
) from exc
return ScenarioImportCommitResponse.model_validate(result)
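
The intended flow is preview first, then commit with the token the preview hands back; the commit body matches ImportCommitRequest, while the token field name on the preview response is an assumption. A sketch with TestClient (main:app assumed):

from fastapi.testclient import TestClient

from main import app  # assumed entry point

client = TestClient(app)

# 1) Dry-run preview of the uploaded CSV.
with open("projects.csv", "rb") as fh:
    preview = client.post(
        "/imports/projects/preview",
        files={"file": ("projects.csv", fh, "text/csv")},
    ).json()

# 2) Commit the staged rows using the preview token.
result = client.post(
    "/imports/projects/commit",
    json={"token": preview["token"]},  # response field name is an assumption
)
assert result.status_code == 200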

routes/navigation.py Normal file

@@ -0,0 +1,63 @@
from __future__ import annotations
from datetime import datetime, timezone
from fastapi import APIRouter, Depends, Request
from dependencies import (
get_auth_session,
get_navigation_service,
require_authenticated_user,
)
from models import User
from schemas.navigation import (
NavigationGroupSchema,
NavigationLinkSchema,
NavigationSidebarResponse,
)
from services.navigation import NavigationGroupDTO, NavigationLinkDTO, NavigationService
from services.session import AuthSession
router = APIRouter(prefix="/navigation", tags=["Navigation"])
def _to_link_schema(dto: NavigationLinkDTO) -> NavigationLinkSchema:
return NavigationLinkSchema(
id=dto.id,
label=dto.label,
href=dto.href,
match_prefix=dto.match_prefix,
icon=dto.icon,
tooltip=dto.tooltip,
is_external=dto.is_external,
children=[_to_link_schema(child) for child in dto.children],
)
def _to_group_schema(dto: NavigationGroupDTO) -> NavigationGroupSchema:
return NavigationGroupSchema(
id=dto.id,
label=dto.label,
icon=dto.icon,
tooltip=dto.tooltip,
links=[_to_link_schema(link) for link in dto.links],
)
@router.get(
"/sidebar",
response_model=NavigationSidebarResponse,
name="navigation.sidebar",
)
async def get_sidebar_navigation(
request: Request,
_: User = Depends(require_authenticated_user),
session: AuthSession = Depends(get_auth_session),
service: NavigationService = Depends(get_navigation_service),
) -> NavigationSidebarResponse:
dto = service.build_sidebar(session=session, request=request)
return NavigationSidebarResponse(
groups=[_to_group_schema(group) for group in dto.groups],
roles=list(dto.roles),
generated_at=datetime.now(tz=timezone.utc),
)

routes/projects.py Normal file

@@ -0,0 +1,337 @@
from __future__ import annotations
from typing import List
from fastapi import APIRouter, Depends, Form, HTTPException, Request, status
from fastapi.responses import HTMLResponse, RedirectResponse
from dependencies import (
get_pricing_metadata,
get_unit_of_work,
require_any_role,
require_any_role_html,
require_project_resource,
require_project_resource_html,
require_roles,
require_roles_html,
)
from models import MiningOperationType, Project, ScenarioStatus, User
from schemas.project import ProjectCreate, ProjectRead, ProjectUpdate
from services.exceptions import EntityConflictError
from services.pricing import PricingMetadata
from services.unit_of_work import UnitOfWork
from routes.template_filters import create_templates
router = APIRouter(prefix="/projects", tags=["Projects"])
templates = create_templates()
READ_ROLES = ("viewer", "analyst", "project_manager", "admin")
MANAGE_ROLES = ("project_manager", "admin")
def _to_read_model(project: Project) -> ProjectRead:
return ProjectRead.model_validate(project)
def _require_project_repo(uow: UnitOfWork):
if not uow.projects:
raise RuntimeError("Project repository not initialised")
return uow.projects
def _operation_type_choices() -> list[tuple[str, str]]:
return [
(op.value, op.name.replace("_", " ").title()) for op in MiningOperationType
]
@router.get("", response_model=List[ProjectRead])
def list_projects(
_: User = Depends(require_any_role(*READ_ROLES)),
uow: UnitOfWork = Depends(get_unit_of_work),
) -> List[ProjectRead]:
projects = _require_project_repo(uow).list()
return [_to_read_model(project) for project in projects]
@router.post("", response_model=ProjectRead, status_code=status.HTTP_201_CREATED)
def create_project(
payload: ProjectCreate,
_: User = Depends(require_roles(*MANAGE_ROLES)),
uow: UnitOfWork = Depends(get_unit_of_work),
metadata: PricingMetadata = Depends(get_pricing_metadata),
) -> ProjectRead:
project = Project(**payload.model_dump())
try:
created = _require_project_repo(uow).create(project)
except EntityConflictError as exc:
raise HTTPException(
status_code=status.HTTP_409_CONFLICT, detail=str(exc)
) from exc
default_settings = uow.ensure_default_pricing_settings(
metadata=metadata).settings
uow.set_project_pricing_settings(created, default_settings)
return _to_read_model(created)
@router.get(
"/ui",
response_class=HTMLResponse,
include_in_schema=False,
name="projects.project_list_page",
)
def project_list_page(
request: Request,
_: User = Depends(require_any_role_html(*READ_ROLES)),
uow: UnitOfWork = Depends(get_unit_of_work),
) -> HTMLResponse:
projects = _require_project_repo(uow).list(with_children=True)
for project in projects:
setattr(project, "scenario_count", len(project.scenarios))
return templates.TemplateResponse(
request,
"projects/list.html",
{
"projects": projects,
},
)
@router.get(
"/create",
response_class=HTMLResponse,
include_in_schema=False,
name="projects.create_project_form",
)
def create_project_form(
request: Request,
_: User = Depends(require_roles_html(*MANAGE_ROLES)),
) -> HTMLResponse:
return templates.TemplateResponse(
request,
"projects/form.html",
{
"project": None,
"operation_types": _operation_type_choices(),
"form_action": request.url_for("projects.create_project_submit"),
"cancel_url": request.url_for("projects.project_list_page"),
},
)
@router.post(
"/create",
include_in_schema=False,
name="projects.create_project_submit",
)
def create_project_submit(
request: Request,
_: User = Depends(require_roles_html(*MANAGE_ROLES)),
name: str = Form(...),
location: str | None = Form(None),
operation_type: str = Form(...),
description: str | None = Form(None),
uow: UnitOfWork = Depends(get_unit_of_work),
metadata: PricingMetadata = Depends(get_pricing_metadata),
):
def _normalise(value: str | None) -> str | None:
if value is None:
return None
value = value.strip()
return value or None
try:
op_type = MiningOperationType(operation_type)
except ValueError:
return templates.TemplateResponse(
request,
"projects/form.html",
{
"project": None,
"operation_types": _operation_type_choices(),
"form_action": request.url_for("projects.create_project_submit"),
"cancel_url": request.url_for("projects.project_list_page"),
"error": "Invalid operation type.",
},
status_code=status.HTTP_400_BAD_REQUEST,
)
project = Project(
name=name.strip(),
location=_normalise(location),
operation_type=op_type,
description=_normalise(description),
)
try:
created = _require_project_repo(uow).create(project)
except EntityConflictError:
return templates.TemplateResponse(
request,
"projects/form.html",
{
"project": project,
"operation_types": _operation_type_choices(),
"form_action": request.url_for("projects.create_project_submit"),
"cancel_url": request.url_for("projects.project_list_page"),
"error": "Project with this name already exists.",
},
status_code=status.HTTP_409_CONFLICT,
)
default_settings = uow.ensure_default_pricing_settings(
metadata=metadata).settings
uow.set_project_pricing_settings(created, default_settings)
return RedirectResponse(
request.url_for("projects.project_list_page"),
status_code=status.HTTP_303_SEE_OTHER,
)
@router.get("/{project_id}", response_model=ProjectRead)
def get_project(project: Project = Depends(require_project_resource())) -> ProjectRead:
return _to_read_model(project)
@router.put("/{project_id}", response_model=ProjectRead)
def update_project(
payload: ProjectUpdate,
project: Project = Depends(
require_project_resource(require_manage=True)
),
uow: UnitOfWork = Depends(get_unit_of_work),
) -> ProjectRead:
update_data = payload.model_dump(exclude_unset=True)
for field, value in update_data.items():
setattr(project, field, value)
uow.flush()
return _to_read_model(project)
@router.delete("/{project_id}", status_code=status.HTTP_204_NO_CONTENT)
def delete_project(
project: Project = Depends(require_project_resource(require_manage=True)),
uow: UnitOfWork = Depends(get_unit_of_work),
) -> None:
_require_project_repo(uow).delete(project.id)
@router.get(
"/{project_id}/view",
response_class=HTMLResponse,
include_in_schema=False,
name="projects.view_project",
)
def view_project(
request: Request,
_: User = Depends(require_any_role_html(*READ_ROLES)),
project: Project = Depends(require_project_resource_html()),
uow: UnitOfWork = Depends(get_unit_of_work),
) -> HTMLResponse:
project = _require_project_repo(uow).get(project.id, with_children=True)
scenarios = sorted(project.scenarios, key=lambda s: s.created_at)
scenario_stats = {
"total": len(scenarios),
"active": sum(1 for scenario in scenarios if scenario.status == ScenarioStatus.ACTIVE),
"draft": sum(1 for scenario in scenarios if scenario.status == ScenarioStatus.DRAFT),
"archived": sum(1 for scenario in scenarios if scenario.status == ScenarioStatus.ARCHIVED),
"latest_update": max(
(scenario.updated_at for scenario in scenarios if scenario.updated_at),
default=None,
),
}
return templates.TemplateResponse(
request,
"projects/detail.html",
{
"project": project,
"scenarios": scenarios,
"scenario_stats": scenario_stats,
},
)
@router.get(
"/{project_id}/edit",
response_class=HTMLResponse,
include_in_schema=False,
name="projects.edit_project_form",
)
def edit_project_form(
request: Request,
_: User = Depends(require_roles_html(*MANAGE_ROLES)),
project: Project = Depends(
require_project_resource_html(require_manage=True)
),
) -> HTMLResponse:
return templates.TemplateResponse(
request,
"projects/form.html",
{
"project": project,
"operation_types": _operation_type_choices(),
"form_action": request.url_for(
"projects.edit_project_submit", project_id=project.id
),
"cancel_url": request.url_for(
"projects.view_project", project_id=project.id
),
},
)
@router.post(
"/{project_id}/edit",
include_in_schema=False,
name="projects.edit_project_submit",
)
def edit_project_submit(
request: Request,
_: User = Depends(require_roles_html(*MANAGE_ROLES)),
project: Project = Depends(
require_project_resource_html(require_manage=True)
),
name: str = Form(...),
location: str | None = Form(None),
operation_type: str | None = Form(None),
description: str | None = Form(None),
uow: UnitOfWork = Depends(get_unit_of_work),
):
def _normalise(value: str | None) -> str | None:
if value is None:
return None
value = value.strip()
return value or None
project.name = name.strip()
project.location = _normalise(location)
if operation_type:
try:
project.operation_type = MiningOperationType(operation_type)
except ValueError:
return templates.TemplateResponse(
request,
"projects/form.html",
{
"project": project,
"operation_types": _operation_type_choices(),
"form_action": request.url_for(
"projects.edit_project_submit", project_id=project.id
),
"cancel_url": request.url_for(
"projects.view_project", project_id=project.id
),
"error": "Invalid operation type.",
},
status_code=status.HTTP_400_BAD_REQUEST,
)
project.description = _normalise(description)
uow.flush()
return RedirectResponse(
request.url_for("projects.view_project", project_id=project.id),
status_code=status.HTTP_303_SEE_OTHER,
)
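
Against the JSON API, creating a project is a single POST; the payload fields follow ProjectCreate as used in the form handler above, while the operation_type value, the other field values, and the cookie name are illustrative assumptions:

import httpx

session_cookies = {"access_token": "<jwt>"}  # cookie name is an assumption

payload = {
    "name": "North Pit",
    "location": "NT, Australia",
    "operation_type": "open_pit",  # must be a valid MiningOperationType value
    "description": "Initial feasibility study",
}
response = httpx.post(
    "http://localhost:8000/projects",
    json=payload,
    cookies=session_cookies,
)
assert response.status_code == 201  # 409 if the name already exists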

routes/reports.py Normal file

@@ -0,0 +1,434 @@
from __future__ import annotations
from datetime import date
from fastapi import APIRouter, Depends, HTTPException, Query, Request, status
from fastapi.encoders import jsonable_encoder
from fastapi.responses import HTMLResponse
from dependencies import (
get_unit_of_work,
require_any_role,
require_any_role_html,
require_project_resource,
require_scenario_resource,
require_project_resource_html,
require_scenario_resource_html,
)
from models import Project, Scenario, User
from services.exceptions import EntityNotFoundError, ScenarioValidationError
from services.reporting import (
DEFAULT_ITERATIONS,
IncludeOptions,
ReportFilters,
ReportingService,
parse_include_tokens,
validate_percentiles,
)
from services.unit_of_work import UnitOfWork
from routes.template_filters import create_templates
router = APIRouter(prefix="/reports", tags=["Reports"])
templates = create_templates()
READ_ROLES = ("viewer", "analyst", "project_manager", "admin")
MANAGE_ROLES = ("project_manager", "admin")
@router.get("/projects/{project_id}", name="reports.project_summary")
def project_summary_report(
project: Project = Depends(require_project_resource()),
_: User = Depends(require_any_role(*READ_ROLES)),
uow: UnitOfWork = Depends(get_unit_of_work),
include: str | None = Query(
None,
description="Comma-separated include tokens (distribution,samples,all).",
),
scenario_ids: list[int] | None = Query(
None,
alias="scenario_ids",
description="Repeatable scenario identifier filter.",
),
start_date: date | None = Query(
None,
description="Filter scenarios starting on or after this date.",
),
end_date: date | None = Query(
None,
description="Filter scenarios ending on or before this date.",
),
fmt: str = Query(
"json",
alias="format",
description="Response format (json only for this endpoint).",
),
iterations: int | None = Query(
None,
gt=0,
description="Override Monte Carlo iteration count when distribution is included.",
),
percentiles: list[float] | None = Query(
None,
description="Percentiles (0-100) for Monte Carlo summaries when included.",
),
) -> dict[str, object]:
if fmt.lower() != "json":
raise HTTPException(
status_code=status.HTTP_406_NOT_ACCEPTABLE,
detail="Only JSON responses are supported; use the HTML endpoint for templates.",
)
include_options = parse_include_tokens(include)
try:
percentile_values = validate_percentiles(percentiles)
except ValueError as exc:
raise HTTPException(
status_code=status.HTTP_422_UNPROCESSABLE_CONTENT,
detail=str(exc),
) from exc
scenario_filter = ReportFilters(
scenario_ids=set(scenario_ids) if scenario_ids else None,
start_date=start_date,
end_date=end_date,
)
service = ReportingService(uow)
report = service.project_summary(
project,
filters=scenario_filter,
include=include_options,
iterations=iterations or DEFAULT_ITERATIONS,
percentiles=percentile_values,
)
return jsonable_encoder(report)
@router.get(
"/projects/{project_id}/scenarios/compare",
name="reports.project_scenario_comparison",
)
def project_scenario_comparison_report(
project: Project = Depends(require_project_resource()),
_: User = Depends(require_any_role(*READ_ROLES)),
uow: UnitOfWork = Depends(get_unit_of_work),
scenario_ids: list[int] = Query(
..., alias="scenario_ids", description="Repeatable scenario identifier."),
include: str | None = Query(
None,
description="Comma-separated include tokens (distribution,samples,all).",
),
fmt: str = Query(
"json",
alias="format",
description="Response format (json only for this endpoint).",
),
iterations: int | None = Query(
None,
gt=0,
description="Override Monte Carlo iteration count when distribution is included.",
),
percentiles: list[float] | None = Query(
None,
description="Percentiles (0-100) for Monte Carlo summaries when included.",
),
) -> dict[str, object]:
unique_ids = list(dict.fromkeys(scenario_ids))
if len(unique_ids) < 2:
raise HTTPException(
status_code=status.HTTP_422_UNPROCESSABLE_CONTENT,
detail="At least two unique scenario_ids must be provided for comparison.",
)
if fmt.lower() != "json":
raise HTTPException(
status_code=status.HTTP_406_NOT_ACCEPTABLE,
detail="Only JSON responses are supported; use the HTML endpoint for templates.",
)
include_options = parse_include_tokens(include)
try:
percentile_values = validate_percentiles(percentiles)
except ValueError as exc:
raise HTTPException(
status_code=status.HTTP_422_UNPROCESSABLE_CONTENT,
detail=str(exc),
) from exc
try:
scenarios = uow.validate_scenarios_for_comparison(unique_ids)
except ScenarioValidationError as exc:
raise HTTPException(
status_code=status.HTTP_422_UNPROCESSABLE_CONTENT,
detail={
"code": exc.code,
"message": exc.message,
"scenario_ids": list(exc.scenario_ids or []),
},
) from exc
except EntityNotFoundError as exc:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail=str(exc),
) from exc
if any(scenario.project_id != project.id for scenario in scenarios):
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail="One or more scenarios are not associated with this project.",
)
service = ReportingService(uow)
report = service.scenario_comparison(
project,
scenarios,
include=include_options,
iterations=iterations or DEFAULT_ITERATIONS,
percentiles=percentile_values,
)
return jsonable_encoder(report)
@router.get(
"/scenarios/{scenario_id}/distribution",
name="reports.scenario_distribution",
)
def scenario_distribution_report(
scenario: Scenario = Depends(require_scenario_resource()),
_: User = Depends(require_any_role(*READ_ROLES)),
uow: UnitOfWork = Depends(get_unit_of_work),
include: str | None = Query(
None,
description="Comma-separated include tokens (samples,all).",
),
fmt: str = Query(
"json",
alias="format",
description="Response format (json only for this endpoint).",
),
iterations: int | None = Query(
None,
gt=0,
description="Override Monte Carlo iteration count (default applies otherwise).",
),
percentiles: list[float] | None = Query(
None,
description="Percentiles (0-100) for Monte Carlo summaries.",
),
) -> dict[str, object]:
if fmt.lower() != "json":
raise HTTPException(
status_code=status.HTTP_406_NOT_ACCEPTABLE,
detail="Only JSON responses are supported; use the HTML endpoint for templates.",
)
requested = parse_include_tokens(include)
include_options = IncludeOptions(
distribution=True, samples=requested.samples)
try:
percentile_values = validate_percentiles(percentiles)
except ValueError as exc:
raise HTTPException(
status_code=status.HTTP_422_UNPROCESSABLE_CONTENT,
detail=str(exc),
) from exc
service = ReportingService(uow)
report = service.scenario_distribution(
scenario,
include=include_options,
iterations=iterations or DEFAULT_ITERATIONS,
percentiles=percentile_values,
)
return jsonable_encoder(report)
@router.get(
"/projects/{project_id}/ui",
response_class=HTMLResponse,
include_in_schema=False,
name="reports.project_summary_page",
)
def project_summary_page(
request: Request,
project: Project = Depends(require_project_resource_html()),
_: User = Depends(require_any_role_html(*READ_ROLES)),
uow: UnitOfWork = Depends(get_unit_of_work),
include: str | None = Query(
None,
description="Comma-separated include tokens (distribution,samples,all).",
),
scenario_ids: list[int] | None = Query(
None,
alias="scenario_ids",
description="Repeatable scenario identifier filter.",
),
start_date: date | None = Query(
None,
description="Filter scenarios starting on or after this date.",
),
end_date: date | None = Query(
None,
description="Filter scenarios ending on or before this date.",
),
iterations: int | None = Query(
None,
gt=0,
description="Override Monte Carlo iteration count when distribution is included.",
),
percentiles: list[float] | None = Query(
None,
description="Percentiles (0-100) for Monte Carlo summaries when included.",
),
) -> HTMLResponse:
include_options = parse_include_tokens(include)
try:
percentile_values = validate_percentiles(percentiles)
except ValueError as exc:
raise HTTPException(
status_code=status.HTTP_422_UNPROCESSABLE_CONTENT,
detail=str(exc),
) from exc
scenario_filter = ReportFilters(
scenario_ids=set(scenario_ids) if scenario_ids else None,
start_date=start_date,
end_date=end_date,
)
service = ReportingService(uow)
context = service.build_project_summary_context(
project, scenario_filter, include_options, iterations or DEFAULT_ITERATIONS, percentile_values, request
)
return templates.TemplateResponse(
request,
"reports/project_summary.html",
context,
)
@router.get(
"/projects/{project_id}/scenarios/compare/ui",
response_class=HTMLResponse,
include_in_schema=False,
name="reports.project_scenario_comparison_page",
)
def project_scenario_comparison_page(
request: Request,
project: Project = Depends(require_project_resource_html()),
_: User = Depends(require_any_role_html(*READ_ROLES)),
uow: UnitOfWork = Depends(get_unit_of_work),
scenario_ids: list[int] = Query(
..., alias="scenario_ids", description="Repeatable scenario identifier."),
include: str | None = Query(
None,
description="Comma-separated include tokens (distribution,samples,all).",
),
iterations: int | None = Query(
None,
gt=0,
description="Override Monte Carlo iteration count when distribution is included.",
),
percentiles: list[float] | None = Query(
None,
description="Percentiles (0-100) for Monte Carlo summaries when included.",
),
) -> HTMLResponse:
unique_ids = list(dict.fromkeys(scenario_ids))
if len(unique_ids) < 2:
raise HTTPException(
status_code=status.HTTP_422_UNPROCESSABLE_CONTENT,
detail="At least two unique scenario_ids must be provided for comparison.",
)
include_options = parse_include_tokens(include)
try:
percentile_values = validate_percentiles(percentiles)
except ValueError as exc:
raise HTTPException(
status_code=status.HTTP_422_UNPROCESSABLE_CONTENT,
detail=str(exc),
) from exc
try:
scenarios = uow.validate_scenarios_for_comparison(unique_ids)
except ScenarioValidationError as exc:
raise HTTPException(
status_code=status.HTTP_422_UNPROCESSABLE_CONTENT,
detail={
"code": exc.code,
"message": exc.message,
"scenario_ids": list(exc.scenario_ids or []),
},
) from exc
except EntityNotFoundError as exc:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail=str(exc),
) from exc
if any(scenario.project_id != project.id for scenario in scenarios):
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail="One or more scenarios are not associated with this project.",
)
service = ReportingService(uow)
context = service.build_scenario_comparison_context(
project, scenarios, include_options, iterations or DEFAULT_ITERATIONS, percentile_values, request
)
return templates.TemplateResponse(
request,
"reports/scenario_comparison.html",
context,
)
@router.get(
"/scenarios/{scenario_id}/distribution/ui",
response_class=HTMLResponse,
include_in_schema=False,
name="reports.scenario_distribution_page",
)
def scenario_distribution_page(
request: Request,
_: User = Depends(require_any_role_html(*READ_ROLES)),
scenario: Scenario = Depends(
require_scenario_resource_html()
),
uow: UnitOfWork = Depends(get_unit_of_work),
include: str | None = Query(
None,
description="Comma-separated include tokens (samples,all).",
),
iterations: int | None = Query(
None,
gt=0,
description="Override Monte Carlo iteration count (default applies otherwise).",
),
percentiles: list[float] | None = Query(
None,
description="Percentiles (0-100) for Monte Carlo summaries.",
),
) -> HTMLResponse:
requested = parse_include_tokens(include)
include_options = IncludeOptions(
distribution=True, samples=requested.samples)
try:
percentile_values = validate_percentiles(percentiles)
except ValueError as exc:
raise HTTPException(
status_code=status.HTTP_422_UNPROCESSABLE_CONTENT,
detail=str(exc),
) from exc
service = ReportingService(uow)
context = service.build_scenario_distribution_context(
scenario, include_options, iterations or DEFAULT_ITERATIONS, percentile_values, request
)
return templates.TemplateResponse(
request,
"reports/scenario_distribution.html",
context,
)
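
The report endpoints are driven entirely by query parameters, with repeatable ones (scenario_ids, percentiles) passed as lists. A sketch of requesting a project summary with a Monte Carlo distribution; host, IDs, and cookie name are illustrative:

import httpx

session_cookies = {"access_token": "<jwt>"}  # cookie name is an assumption

params = {
    "include": "distribution",
    "scenario_ids": [3, 4],            # repeatable filter
    "iterations": 2000,                # overrides DEFAULT_ITERATIONS
    "percentiles": [5.0, 50.0, 95.0],  # checked by validate_percentiles
    "format": "json",
}
report = httpx.get(
    "http://localhost:8000/reports/projects/1",
    params=params,
    cookies=session_cookies,
).json()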

routes/scenarios.py Normal file

@@ -0,0 +1,656 @@
from __future__ import annotations
from datetime import date
from types import SimpleNamespace
from typing import List
from fastapi import APIRouter, Depends, Form, HTTPException, Request, status
from fastapi.responses import HTMLResponse, RedirectResponse
from dependencies import (
get_pricing_metadata,
get_unit_of_work,
require_any_role,
require_any_role_html,
require_roles,
require_roles_html,
require_scenario_resource,
require_scenario_resource_html,
)
from models import ResourceType, Scenario, ScenarioStatus, User
from schemas.scenario import (
ScenarioComparisonRequest,
ScenarioComparisonResponse,
ScenarioCreate,
ScenarioRead,
ScenarioUpdate,
)
from services.currency import CurrencyValidationError, normalise_currency
from services.exceptions import (
EntityConflictError,
EntityNotFoundError,
ScenarioValidationError,
)
from services.pricing import PricingMetadata
from services.unit_of_work import UnitOfWork
from routes.template_filters import create_templates
router = APIRouter(tags=["Scenarios"])
templates = create_templates()
READ_ROLES = ("viewer", "analyst", "project_manager", "admin")
MANAGE_ROLES = ("project_manager", "admin")
def _to_read_model(scenario: Scenario) -> ScenarioRead:
return ScenarioRead.model_validate(scenario)
def _resource_type_choices() -> list[tuple[str, str]]:
return [
(resource.value, resource.value.replace("_", " ").title())
for resource in ResourceType
]
def _scenario_status_choices() -> list[tuple[str, str]]:
    # "member" avoids shadowing the fastapi.status module imported above.
    return [
        (member.value, member.value.title()) for member in ScenarioStatus
    ]
def _require_project_repo(uow: UnitOfWork):
if not uow.projects:
raise RuntimeError("Project repository not initialised")
return uow.projects
def _require_scenario_repo(uow: UnitOfWork):
if not uow.scenarios:
raise RuntimeError("Scenario repository not initialised")
return uow.scenarios
@router.get(
"/projects/{project_id}/scenarios",
response_model=List[ScenarioRead],
)
def list_scenarios_for_project(
project_id: int,
_: User = Depends(require_any_role(*READ_ROLES)),
uow: UnitOfWork = Depends(get_unit_of_work),
) -> List[ScenarioRead]:
project_repo = _require_project_repo(uow)
scenario_repo = _require_scenario_repo(uow)
try:
project_repo.get(project_id)
except EntityNotFoundError as exc:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND, detail=str(exc)) from exc
scenarios = scenario_repo.list_for_project(project_id)
return [_to_read_model(scenario) for scenario in scenarios]
@router.post(
"/projects/{project_id}/scenarios/compare",
response_model=ScenarioComparisonResponse,
status_code=status.HTTP_200_OK,
)
def compare_scenarios(
project_id: int,
payload: ScenarioComparisonRequest,
_: User = Depends(require_any_role(*READ_ROLES)),
uow: UnitOfWork = Depends(get_unit_of_work),
) -> ScenarioComparisonResponse:
try:
_require_project_repo(uow).get(project_id)
except EntityNotFoundError as exc:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND, detail=str(exc)
) from exc
try:
scenarios = uow.validate_scenarios_for_comparison(payload.scenario_ids)
if any(scenario.project_id != project_id for scenario in scenarios):
raise ScenarioValidationError(
code="SCENARIO_PROJECT_MISMATCH",
message="Selected scenarios do not belong to the same project.",
scenario_ids=[
scenario.id for scenario in scenarios if scenario.id is not None
],
)
except EntityNotFoundError as exc:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND, detail=str(exc)
) from exc
except ScenarioValidationError as exc:
raise HTTPException(
status_code=status.HTTP_422_UNPROCESSABLE_CONTENT,
detail={
"code": exc.code,
"message": exc.message,
"scenario_ids": list(exc.scenario_ids or []),
},
) from exc
return ScenarioComparisonResponse(
project_id=project_id,
scenarios=[_to_read_model(scenario) for scenario in scenarios],
)
@router.post(
"/projects/{project_id}/scenarios",
response_model=ScenarioRead,
status_code=status.HTTP_201_CREATED,
)
def create_scenario_for_project(
project_id: int,
payload: ScenarioCreate,
_: User = Depends(require_roles(*MANAGE_ROLES)),
uow: UnitOfWork = Depends(get_unit_of_work),
metadata: PricingMetadata = Depends(get_pricing_metadata),
) -> ScenarioRead:
project_repo = _require_project_repo(uow)
scenario_repo = _require_scenario_repo(uow)
try:
project_repo.get(project_id)
except EntityNotFoundError as exc:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND, detail=str(exc)) from exc
scenario_data = payload.model_dump()
if not scenario_data.get("currency") and metadata.default_currency:
scenario_data["currency"] = metadata.default_currency
scenario = Scenario(project_id=project_id, **scenario_data)
try:
created = scenario_repo.create(scenario)
except EntityConflictError as exc:
raise HTTPException(
status_code=status.HTTP_409_CONFLICT, detail=str(exc)) from exc
return _to_read_model(created)
@router.get(
"/projects/{project_id}/scenarios/ui",
response_class=HTMLResponse,
include_in_schema=False,
name="scenarios.project_scenario_list",
)
def project_scenario_list_page(
project_id: int,
request: Request,
_: User = Depends(require_any_role_html(*READ_ROLES)),
uow: UnitOfWork = Depends(get_unit_of_work),
) -> HTMLResponse:
try:
project = _require_project_repo(uow).get(
project_id, with_children=True)
except EntityNotFoundError as exc:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND, detail=str(exc)
) from exc
scenarios = sorted(
project.scenarios,
key=lambda scenario: scenario.updated_at or scenario.created_at,
reverse=True,
)
scenario_totals = {
"total": len(scenarios),
"active": sum(
1 for scenario in scenarios if scenario.status == ScenarioStatus.ACTIVE
),
"draft": sum(
1 for scenario in scenarios if scenario.status == ScenarioStatus.DRAFT
),
"archived": sum(
1 for scenario in scenarios if scenario.status == ScenarioStatus.ARCHIVED
),
"latest_update": max(
(
scenario.updated_at or scenario.created_at
for scenario in scenarios
if scenario.updated_at or scenario.created_at
),
default=None,
),
}
return templates.TemplateResponse(
request,
"scenarios/list.html",
{
"project": project,
"scenarios": scenarios,
"scenario_totals": scenario_totals,
},
)
@router.get("/scenarios/{scenario_id}", response_model=ScenarioRead)
def get_scenario(
scenario: Scenario = Depends(require_scenario_resource()),
) -> ScenarioRead:
return _to_read_model(scenario)
@router.put("/scenarios/{scenario_id}", response_model=ScenarioRead)
def update_scenario(
payload: ScenarioUpdate,
scenario: Scenario = Depends(
require_scenario_resource(require_manage=True)
),
uow: UnitOfWork = Depends(get_unit_of_work),
) -> ScenarioRead:
update_data = payload.model_dump(exclude_unset=True)
for field, value in update_data.items():
setattr(scenario, field, value)
uow.flush()
return _to_read_model(scenario)
@router.delete("/scenarios/{scenario_id}", status_code=status.HTTP_204_NO_CONTENT)
def delete_scenario(
scenario: Scenario = Depends(
require_scenario_resource(require_manage=True)
),
uow: UnitOfWork = Depends(get_unit_of_work),
) -> None:
_require_scenario_repo(uow).delete(scenario.id)
def _normalise(value: str | None) -> str | None:
if value is None:
return None
value = value.strip()
return value or None
def _parse_date(value: str | None) -> date | None:
value = _normalise(value)
if not value:
return None
return date.fromisoformat(value)
def _parse_discount_rate(value: str | None) -> float | None:
value = _normalise(value)
if not value:
return None
try:
return float(value)
except ValueError:
return None
def _scenario_form_state(
*,
project_id: int,
name: str,
description: str | None,
status: ScenarioStatus,
start_date: date | None,
end_date: date | None,
discount_rate: float | None,
currency: str | None,
primary_resource: ResourceType | None,
scenario_id: int | None = None,
) -> SimpleNamespace:
return SimpleNamespace(
id=scenario_id,
project_id=project_id,
name=name,
description=description,
status=status,
start_date=start_date,
end_date=end_date,
discount_rate=discount_rate,
currency=currency,
primary_resource=primary_resource,
)
@router.get(
"/projects/{project_id}/scenarios/new",
response_class=HTMLResponse,
include_in_schema=False,
name="scenarios.create_scenario_form",
)
def create_scenario_form(
project_id: int,
request: Request,
_: User = Depends(require_roles_html(*MANAGE_ROLES)),
uow: UnitOfWork = Depends(get_unit_of_work),
metadata: PricingMetadata = Depends(get_pricing_metadata),
) -> HTMLResponse:
try:
project = _require_project_repo(uow).get(project_id)
except EntityNotFoundError as exc:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND, detail=str(exc)
) from exc
return templates.TemplateResponse(
request,
"scenarios/form.html",
{
"project": project,
"scenario": None,
"scenario_statuses": _scenario_status_choices(),
"resource_types": _resource_type_choices(),
"form_action": request.url_for(
"scenarios.create_scenario_submit", project_id=project_id
),
"cancel_url": request.url_for(
"projects.view_project", project_id=project_id
),
"default_currency": metadata.default_currency,
},
)
@router.post(
"/projects/{project_id}/scenarios/new",
include_in_schema=False,
name="scenarios.create_scenario_submit",
)
def create_scenario_submit(
project_id: int,
request: Request,
_: User = Depends(require_roles_html(*MANAGE_ROLES)),
name: str = Form(...),
description: str | None = Form(None),
status_value: str = Form(ScenarioStatus.DRAFT.value),
start_date: str | None = Form(None),
end_date: str | None = Form(None),
discount_rate: str | None = Form(None),
currency: str | None = Form(None),
primary_resource: str | None = Form(None),
uow: UnitOfWork = Depends(get_unit_of_work),
metadata: PricingMetadata = Depends(get_pricing_metadata),
):
project_repo = _require_project_repo(uow)
scenario_repo = _require_scenario_repo(uow)
try:
project = project_repo.get(project_id)
except EntityNotFoundError as exc:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND, detail=str(exc)
) from exc
try:
status_enum = ScenarioStatus(status_value)
except ValueError:
status_enum = ScenarioStatus.DRAFT
resource_enum = None
if primary_resource:
try:
resource_enum = ResourceType(primary_resource)
except ValueError:
resource_enum = None
name_value = name.strip()
description_value = _normalise(description)
start_date_value = _parse_date(start_date)
end_date_value = _parse_date(end_date)
discount_rate_value = _parse_discount_rate(discount_rate)
currency_input = _normalise(currency)
effective_currency = currency_input or metadata.default_currency
try:
currency_value = (
normalise_currency(effective_currency)
if effective_currency else None
)
except CurrencyValidationError as exc:
form_state = _scenario_form_state(
project_id=project_id,
name=name_value,
description=description_value,
status=status_enum,
start_date=start_date_value,
end_date=end_date_value,
discount_rate=discount_rate_value,
currency=currency_input or metadata.default_currency,
primary_resource=resource_enum,
)
return templates.TemplateResponse(
request,
"scenarios/form.html",
{
"project": project,
"scenario": form_state,
"scenario_statuses": _scenario_status_choices(),
"resource_types": _resource_type_choices(),
"form_action": request.url_for(
"scenarios.create_scenario_submit", project_id=project_id
),
"cancel_url": request.url_for(
"projects.view_project", project_id=project_id
),
"error": str(exc),
"error_field": "currency",
"default_currency": metadata.default_currency,
},
status_code=status.HTTP_400_BAD_REQUEST,
)
scenario = Scenario(
project_id=project_id,
name=name_value,
description=description_value,
status=status_enum,
start_date=start_date_value,
end_date=end_date_value,
discount_rate=discount_rate_value,
currency=currency_value,
primary_resource=resource_enum,
)
try:
scenario_repo.create(scenario)
except EntityConflictError:
return templates.TemplateResponse(
request,
"scenarios/form.html",
{
"project": project,
"scenario": scenario,
"scenario_statuses": _scenario_status_choices(),
"resource_types": _resource_type_choices(),
"form_action": request.url_for(
"scenarios.create_scenario_submit", project_id=project_id
),
"cancel_url": request.url_for(
"projects.view_project", project_id=project_id
),
"error": "Scenario with this name already exists for this project.",
"error_field": "name",
"default_currency": metadata.default_currency,
},
status_code=status.HTTP_409_CONFLICT,
)
return RedirectResponse(
request.url_for("projects.view_project", project_id=project_id),
status_code=status.HTTP_303_SEE_OTHER,
)
@router.get(
"/scenarios/{scenario_id}/view",
response_class=HTMLResponse,
include_in_schema=False,
name="scenarios.view_scenario",
)
def view_scenario(
request: Request,
_: User = Depends(require_any_role_html(*READ_ROLES)),
scenario: Scenario = Depends(
require_scenario_resource_html(with_children=True)
),
uow: UnitOfWork = Depends(get_unit_of_work),
) -> HTMLResponse:
project = _require_project_repo(uow).get(scenario.project_id)
financial_inputs = sorted(
scenario.financial_inputs, key=lambda item: item.created_at
)
simulation_parameters = sorted(
scenario.simulation_parameters, key=lambda item: item.created_at
)
scenario_metrics = {
"financial_count": len(financial_inputs),
"parameter_count": len(simulation_parameters),
"currency": scenario.currency,
"primary_resource": scenario.primary_resource.value.replace('_', ' ').title() if scenario.primary_resource else None,
}
return templates.TemplateResponse(
request,
"scenarios/detail.html",
{
"project": project,
"scenario": scenario,
"scenario_metrics": scenario_metrics,
"financial_inputs": financial_inputs,
"simulation_parameters": simulation_parameters,
},
)
@router.get(
"/scenarios/{scenario_id}/edit",
response_class=HTMLResponse,
include_in_schema=False,
name="scenarios.edit_scenario_form",
)
def edit_scenario_form(
request: Request,
_: User = Depends(require_roles_html(*MANAGE_ROLES)),
scenario: Scenario = Depends(
require_scenario_resource_html(require_manage=True)
),
uow: UnitOfWork = Depends(get_unit_of_work),
metadata: PricingMetadata = Depends(get_pricing_metadata),
) -> HTMLResponse:
project = _require_project_repo(uow).get(scenario.project_id)
return templates.TemplateResponse(
request,
"scenarios/form.html",
{
"project": project,
"scenario": scenario,
"scenario_statuses": _scenario_status_choices(),
"resource_types": _resource_type_choices(),
"form_action": request.url_for(
"scenarios.edit_scenario_submit", scenario_id=scenario.id
),
"cancel_url": request.url_for(
"scenarios.view_scenario", scenario_id=scenario.id
),
"default_currency": metadata.default_currency,
},
)
@router.post(
"/scenarios/{scenario_id}/edit",
include_in_schema=False,
name="scenarios.edit_scenario_submit",
)
def edit_scenario_submit(
request: Request,
_: User = Depends(require_roles_html(*MANAGE_ROLES)),
scenario: Scenario = Depends(
require_scenario_resource_html(require_manage=True)
),
name: str = Form(...),
description: str | None = Form(None),
status_value: str = Form(ScenarioStatus.DRAFT.value),
start_date: str | None = Form(None),
end_date: str | None = Form(None),
discount_rate: str | None = Form(None),
currency: str | None = Form(None),
primary_resource: str | None = Form(None),
uow: UnitOfWork = Depends(get_unit_of_work),
metadata: PricingMetadata = Depends(get_pricing_metadata),
):
project = _require_project_repo(uow).get(scenario.project_id)
name_value = name.strip()
description_value = _normalise(description)
try:
scenario.status = ScenarioStatus(status_value)
except ValueError:
scenario.status = ScenarioStatus.DRAFT
status_enum = scenario.status
resource_enum = None
if primary_resource:
try:
resource_enum = ResourceType(primary_resource)
except ValueError:
resource_enum = None
start_date_value = _parse_date(start_date)
end_date_value = _parse_date(end_date)
discount_rate_value = _parse_discount_rate(discount_rate)
currency_input = _normalise(currency)
try:
        currency_value = (
            normalise_currency(currency_input) if currency_input else None
        )
except CurrencyValidationError as exc:
form_state = _scenario_form_state(
scenario_id=scenario.id,
project_id=scenario.project_id,
name=name_value,
description=description_value,
status=status_enum,
start_date=start_date_value,
end_date=end_date_value,
discount_rate=discount_rate_value,
currency=currency_input,
primary_resource=resource_enum,
)
return templates.TemplateResponse(
request,
"scenarios/form.html",
{
"project": project,
"scenario": form_state,
"scenario_statuses": _scenario_status_choices(),
"resource_types": _resource_type_choices(),
"form_action": request.url_for(
"scenarios.edit_scenario_submit", scenario_id=scenario.id
),
"cancel_url": request.url_for(
"scenarios.view_scenario", scenario_id=scenario.id
),
"error": str(exc),
"error_field": "currency",
"default_currency": metadata.default_currency,
},
status_code=status.HTTP_400_BAD_REQUEST,
)
scenario.name = name_value
scenario.description = description_value
scenario.start_date = start_date_value
scenario.end_date = end_date_value
scenario.discount_rate = discount_rate_value
scenario.currency = currency_value
scenario.primary_resource = resource_enum
uow.flush()
return RedirectResponse(
request.url_for("scenarios.view_scenario", scenario_id=scenario.id),
status_code=status.HTTP_303_SEE_OTHER,
)
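
For reference, a minimal client-side sketch of the JSON create endpoint above (httpx and a locally running server are assumptions; authentication is omitted even though the route requires a manage role):

import httpx

# POST /projects/{project_id}/scenarios with a ScenarioCreate payload.
# When "currency" is omitted, the endpoint falls back to the pricing
# metadata default before persisting.
response = httpx.post(
    "http://localhost:8000/projects/1/scenarios",
    json={"name": "Base Case", "status": "draft", "currency": "USD"},
)
assert response.status_code == 201
print(response.json()["id"])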

147
routes/template_filters.py Normal file

@@ -0,0 +1,147 @@
from __future__ import annotations
import logging
from datetime import datetime, timezone
from typing import Any
from fastapi import Request
from fastapi.templating import Jinja2Templates
from services.navigation import NavigationService
from services.session import AuthSession
from services.unit_of_work import UnitOfWork
logger = logging.getLogger(__name__)
def format_datetime(value: Any) -> str:
"""Render datetime values consistently for templates."""
if not isinstance(value, datetime):
return ""
if value.tzinfo is None:
value = value.replace(tzinfo=timezone.utc)
return value.strftime("%Y-%m-%d %H:%M UTC")
def currency_display(value: Any, currency_code: str | None) -> str:
"""Format numeric values with currency context."""
if value is None:
return ""
if isinstance(value, (int, float)):
formatted_value = f"{value:,.2f}"
else:
formatted_value = str(value)
if currency_code:
return f"{currency_code} {formatted_value}"
return formatted_value
def format_metric(value: Any, metric_name: str, currency_code: str | None = None) -> str:
"""Format metrics according to their semantic type."""
if value is None:
return ""
currency_metrics = {
"npv",
"inflows",
"outflows",
"net",
"total_inflows",
"total_outflows",
"total_net",
}
if metric_name in currency_metrics and currency_code:
return currency_display(value, currency_code)
percentage_metrics = {"irr", "payback_period"}
if metric_name in percentage_metrics:
if isinstance(value, (int, float)):
return f"{value:.2f}%"
return f"{value}%"
if isinstance(value, (int, float)):
return f"{value:,.2f}"
return str(value)
def percentage_display(value: Any) -> str:
"""Format numeric values as percentages."""
if value is None:
return ""
if isinstance(value, (int, float)):
return f"{value:.2f}%"
return f"{value}%"
def period_display(value: Any) -> str:
"""Format period values in years."""
if value is None:
return ""
if isinstance(value, (int, float)):
if value == int(value):
return f"{int(value)} years"
return f"{value:.1f} years"
return str(value)
def register_common_filters(templates: Jinja2Templates) -> None:
templates.env.filters["format_datetime"] = format_datetime
templates.env.filters["currency_display"] = currency_display
templates.env.filters["format_metric"] = format_metric
templates.env.filters["percentage_display"] = percentage_display
templates.env.filters["period_display"] = period_display
def _sidebar_navigation_for_request(request: Request | None):
if request is None:
return None
cached = getattr(request.state, "_navigation_sidebar_dto", None)
if cached is not None:
return cached
session_context = getattr(request.state, "auth_session", None)
if isinstance(session_context, AuthSession):
session = session_context
else:
session = AuthSession.anonymous()
try:
with UnitOfWork() as uow:
if not uow.navigation:
logger.debug("Navigation repository unavailable for sidebar rendering")
sidebar_dto = None
else:
service = NavigationService(uow.navigation)
sidebar_dto = service.build_sidebar(session=session, request=request)
except Exception: # pragma: no cover - defensive fallback for templates
logger.exception("Failed to build sidebar navigation during template render")
sidebar_dto = None
setattr(request.state, "_navigation_sidebar_dto", sidebar_dto)
return sidebar_dto
def register_navigation_globals(templates: Jinja2Templates) -> None:
templates.env.globals["get_sidebar_navigation"] = _sidebar_navigation_for_request
def create_templates() -> Jinja2Templates:
templates = Jinja2Templates(directory="templates")
register_common_filters(templates)
register_navigation_globals(templates)
return templates
__all__ = [
"format_datetime",
"currency_display",
"format_metric",
"percentage_display",
"period_display",
"register_common_filters",
"register_navigation_globals",
"create_templates",
]
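
A quick sketch of the filters in isolation; all values are illustrative:

from datetime import datetime, timezone

from routes.template_filters import (
    currency_display,
    format_datetime,
    format_metric,
    percentage_display,
    period_display,
)

print(format_datetime(datetime(2025, 11, 14, 20, 32, tzinfo=timezone.utc)))
# -> "2025-11-14 20:32 UTC"
print(currency_display(1234567.891, "USD"))    # -> "USD 1,234,567.89"
print(format_metric(1250000.0, "npv", "EUR"))  # -> "EUR 1,250,000.00"
print(format_metric(12.5, "irr"))              # -> "12.50%"
print(percentage_display(7.25))                # -> "7.25%"
print(period_display(3.5))                     # -> "3.5 years"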

109
routes/ui.py Normal file

@@ -0,0 +1,109 @@
from __future__ import annotations
from fastapi import APIRouter, Depends, Request
from fastapi.responses import HTMLResponse
from dependencies import require_any_role_html, require_roles_html
from models import User
from routes.template_filters import create_templates
router = APIRouter(tags=["UI"])
templates = create_templates()
READ_ROLES = ("viewer", "analyst", "project_manager", "admin")
MANAGE_ROLES = ("project_manager", "admin")
@router.get(
"/ui/simulations",
response_class=HTMLResponse,
include_in_schema=False,
name="ui.simulations",
)
def simulations_dashboard(
request: Request,
_: User = Depends(require_any_role_html(*READ_ROLES)),
) -> HTMLResponse:
return templates.TemplateResponse(
request,
"simulations.html",
{
"title": "Simulations",
},
)
@router.get(
"/ui/reporting",
response_class=HTMLResponse,
include_in_schema=False,
name="ui.reporting",
)
def reporting_dashboard(
request: Request,
_: User = Depends(require_any_role_html(*READ_ROLES)),
) -> HTMLResponse:
return templates.TemplateResponse(
request,
"reporting.html",
{
"title": "Reporting",
},
)
@router.get(
"/ui/settings",
response_class=HTMLResponse,
include_in_schema=False,
name="ui.settings",
)
def settings_page(
request: Request,
_: User = Depends(require_any_role_html(*READ_ROLES)),
) -> HTMLResponse:
return templates.TemplateResponse(
request,
"settings.html",
{
"title": "Settings",
},
)
@router.get(
"/theme-settings",
response_class=HTMLResponse,
include_in_schema=False,
name="ui.theme_settings",
)
def theme_settings_page(
request: Request,
_: User = Depends(require_any_role_html(*READ_ROLES)),
) -> HTMLResponse:
return templates.TemplateResponse(
request,
"theme_settings.html",
{
"title": "Theme Settings",
},
)
@router.get(
"/ui/currencies",
response_class=HTMLResponse,
include_in_schema=False,
name="ui.currencies",
)
def currencies_page(
request: Request,
_: User = Depends(require_roles_html(*MANAGE_ROLES)),
) -> HTMLResponse:
return templates.TemplateResponse(
request,
"currencies.html",
{
"title": "Currency Management",
},
)
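
The module only declares routes; how the router is mounted is outside this diff. A plausible wiring sketch (the application module and its name are assumptions):

from fastapi import FastAPI

from routes.ui import router as ui_router

app = FastAPI()
app.include_router(ui_router)
# The dashboards are then served at /ui/simulations, /ui/reporting,
# /ui/settings, /theme-settings, and /ui/currencies.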

67
schemas/auth.py Normal file

@@ -0,0 +1,67 @@
from __future__ import annotations
from pydantic import BaseModel, ConfigDict, Field, ValidationInfo, field_validator
class FormModel(BaseModel):
"""Base Pydantic model for HTML form submissions."""
model_config = ConfigDict(extra="forbid", str_strip_whitespace=True)
class RegistrationForm(FormModel):
username: str = Field(min_length=3, max_length=128)
email: str = Field(min_length=5, max_length=255)
password: str = Field(min_length=8, max_length=256)
confirm_password: str
@field_validator("email")
@classmethod
def validate_email(cls, value: str) -> str:
if "@" not in value or value.startswith("@") or value.endswith("@"):
raise ValueError("Invalid email address.")
local, domain = value.split("@", 1)
if not local or "." not in domain:
raise ValueError("Invalid email address.")
return value.lower()
@field_validator("confirm_password")
@classmethod
def passwords_match(cls, value: str, info: ValidationInfo) -> str:
password = info.data.get("password")
if password != value:
raise ValueError("Passwords do not match.")
return value
class LoginForm(FormModel):
username: str = Field(min_length=1, max_length=255)
password: str = Field(min_length=1, max_length=256)
class PasswordResetRequestForm(FormModel):
email: str = Field(min_length=5, max_length=255)
@field_validator("email")
@classmethod
def validate_email(cls, value: str) -> str:
if "@" not in value or value.startswith("@") or value.endswith("@"):
raise ValueError("Invalid email address.")
local, domain = value.split("@", 1)
if not local or "." not in domain:
raise ValueError("Invalid email address.")
return value.lower()
class PasswordResetForm(FormModel):
token: str = Field(min_length=1)
password: str = Field(min_length=8, max_length=256)
confirm_password: str
@field_validator("confirm_password")
@classmethod
def reset_passwords_match(cls, value: str, info: ValidationInfo) -> str:
password = info.data.get("password")
if password != value:
raise ValueError("Passwords do not match.")
return value
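
A short sketch of the registration validators in action (all values are illustrative):

from pydantic import ValidationError

from schemas.auth import RegistrationForm

form = RegistrationForm(
    username="analyst",
    email="Analyst@Example.com",  # lower-cased by validate_email
    password="s3cret-pass",
    confirm_password="s3cret-pass",
)
assert form.email == "analyst@example.com"

try:
    RegistrationForm(
        username="analyst",
        email="analyst@example.com",
        password="s3cret-pass",
        confirm_password="other-pass",
    )
except ValidationError as exc:
    print(exc.errors()[0]["msg"])  # the passwords-do-not-match error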

346
schemas/calculations.py Normal file

@@ -0,0 +1,346 @@
"""Pydantic schemas for calculation workflows."""
from __future__ import annotations
from typing import List
from pydantic import BaseModel, Field, PositiveFloat, ValidationError, field_validator
from services.pricing import PricingResult
class ImpurityInput(BaseModel):
"""Impurity configuration row supplied by the client."""
name: str = Field(..., min_length=1)
value: float | None = Field(None, ge=0)
threshold: float | None = Field(None, ge=0)
penalty: float | None = Field(None)
@field_validator("name")
@classmethod
def _normalise_name(cls, value: str) -> str:
return value.strip()
class ProfitabilityCalculationRequest(BaseModel):
"""Request payload for profitability calculations."""
metal: str = Field(..., min_length=1)
ore_tonnage: PositiveFloat
head_grade_pct: float = Field(..., gt=0, le=100)
recovery_pct: float = Field(..., gt=0, le=100)
payable_pct: float | None = Field(None, gt=0, le=100)
reference_price: PositiveFloat
treatment_charge: float = Field(0, ge=0)
smelting_charge: float = Field(0, ge=0)
moisture_pct: float = Field(0, ge=0, le=100)
moisture_threshold_pct: float | None = Field(None, ge=0, le=100)
moisture_penalty_per_pct: float | None = None
premiums: float = Field(0)
fx_rate: PositiveFloat = Field(1)
currency_code: str | None = Field(None, min_length=3, max_length=3)
opex: float = Field(0, ge=0)
sustaining_capex: float = Field(0, ge=0)
capex: float = Field(0, ge=0)
discount_rate: float | None = Field(None, ge=0, le=100)
periods: int = Field(10, ge=1, le=120)
impurities: List[ImpurityInput] = Field(default_factory=list)
@field_validator("currency_code")
@classmethod
def _uppercase_currency(cls, value: str | None) -> str | None:
if value is None:
return None
return value.strip().upper()
@field_validator("metal")
@classmethod
def _normalise_metal(cls, value: str) -> str:
return value.strip().lower()
class ProfitabilityCosts(BaseModel):
"""Aggregated cost components for profitability output."""
opex_total: float
sustaining_capex_total: float
capex: float
class ProfitabilityMetrics(BaseModel):
"""Financial KPIs yielded by the profitability calculation."""
npv: float | None
irr: float | None
payback_period: float | None
margin: float | None
class CashFlowEntry(BaseModel):
"""Normalized cash flow row for reporting and charting."""
period: int
revenue: float
opex: float
sustaining_capex: float
net: float
class ProfitabilityCalculationResult(BaseModel):
"""Response body summarizing profitability calculation outputs."""
pricing: PricingResult
costs: ProfitabilityCosts
metrics: ProfitabilityMetrics
cash_flows: list[CashFlowEntry]
currency: str | None
class CapexComponentInput(BaseModel):
"""Capex component entry supplied by the UI."""
id: int | None = Field(default=None, ge=1)
name: str = Field(..., min_length=1)
category: str = Field(..., min_length=1)
amount: float = Field(..., ge=0)
currency: str | None = Field(None, min_length=3, max_length=3)
spend_year: int | None = Field(None, ge=0, le=120)
notes: str | None = Field(None, max_length=500)
@field_validator("currency")
@classmethod
def _uppercase_currency(cls, value: str | None) -> str | None:
if value is None:
return None
return value.strip().upper()
@field_validator("category")
@classmethod
def _normalise_category(cls, value: str) -> str:
return value.strip().lower()
@field_validator("name")
@classmethod
def _trim_name(cls, value: str) -> str:
return value.strip()
class CapexParameters(BaseModel):
"""Global parameters applied to capex calculations."""
currency_code: str | None = Field(None, min_length=3, max_length=3)
contingency_pct: float | None = Field(0, ge=0, le=100)
discount_rate_pct: float | None = Field(None, ge=0, le=100)
evaluation_horizon_years: int | None = Field(10, ge=1, le=100)
@field_validator("currency_code")
@classmethod
def _uppercase_currency(cls, value: str | None) -> str | None:
if value is None:
return None
return value.strip().upper()
class CapexCalculationOptions(BaseModel):
"""Optional behaviour flags for capex calculations."""
persist: bool = False
class CapexCalculationRequest(BaseModel):
"""Request payload for capex aggregation."""
components: List[CapexComponentInput] = Field(default_factory=list)
parameters: CapexParameters = Field(
default_factory=CapexParameters, # type: ignore[arg-type]
)
options: CapexCalculationOptions = Field(
default_factory=CapexCalculationOptions, # type: ignore[arg-type]
)
class CapexCategoryBreakdown(BaseModel):
"""Breakdown entry describing category totals."""
category: str
amount: float = Field(..., ge=0)
share: float | None = Field(None, ge=0, le=100)
class CapexTotals(BaseModel):
"""Aggregated totals for capex workflows."""
overall: float = Field(..., ge=0)
contingency_pct: float = Field(0, ge=0, le=100)
contingency_amount: float = Field(..., ge=0)
with_contingency: float = Field(..., ge=0)
by_category: List[CapexCategoryBreakdown] = Field(default_factory=list)
class CapexTimelineEntry(BaseModel):
"""Spend profile entry grouped by year."""
year: int
spend: float = Field(..., ge=0)
cumulative: float = Field(..., ge=0)
class CapexCalculationResult(BaseModel):
"""Response body for capex calculations."""
totals: CapexTotals
timeline: List[CapexTimelineEntry] = Field(default_factory=list)
components: List[CapexComponentInput] = Field(default_factory=list)
parameters: CapexParameters
options: CapexCalculationOptions
currency: str | None
class OpexComponentInput(BaseModel):
"""opex component entry supplied by the UI."""
id: int | None = Field(default=None, ge=1)
name: str = Field(..., min_length=1)
category: str = Field(..., min_length=1)
unit_cost: float = Field(..., ge=0)
quantity: float = Field(..., ge=0)
frequency: str = Field(..., min_length=1)
currency: str | None = Field(None, min_length=3, max_length=3)
period_start: int | None = Field(None, ge=0, le=240)
period_end: int | None = Field(None, ge=0, le=240)
notes: str | None = Field(None, max_length=500)
@field_validator("currency")
@classmethod
def _uppercase_currency(cls, value: str | None) -> str | None:
if value is None:
return None
return value.strip().upper()
@field_validator("category")
@classmethod
def _normalise_category(cls, value: str) -> str:
return value.strip().lower()
@field_validator("frequency")
@classmethod
def _normalise_frequency(cls, value: str) -> str:
return value.strip().lower()
@field_validator("name")
@classmethod
def _trim_name(cls, value: str) -> str:
return value.strip()
class OpexParameters(BaseModel):
"""Global parameters applied to opex calculations."""
currency_code: str | None = Field(None, min_length=3, max_length=3)
escalation_pct: float | None = Field(None, ge=0, le=100)
discount_rate_pct: float | None = Field(None, ge=0, le=100)
evaluation_horizon_years: int | None = Field(10, ge=1, le=100)
apply_escalation: bool = True
@field_validator("currency_code")
@classmethod
def _uppercase_currency(cls, value: str | None) -> str | None:
if value is None:
return None
return value.strip().upper()
class OpexOptions(BaseModel):
"""Optional behaviour flags for opex calculations."""
persist: bool = False
snapshot_notes: str | None = Field(None, max_length=500)
class OpexCalculationRequest(BaseModel):
"""Request payload for opex aggregation."""
components: List[OpexComponentInput] = Field(
default_factory=list)
parameters: OpexParameters = Field(
default_factory=OpexParameters, # type: ignore[arg-type]
)
options: OpexOptions = Field(
default_factory=OpexOptions, # type: ignore[arg-type]
)
class OpexCategoryBreakdown(BaseModel):
"""Category breakdown for opex totals."""
category: str
annual_cost: float = Field(..., ge=0)
share: float | None = Field(None, ge=0, le=100)
class OpexTimelineEntry(BaseModel):
"""Timeline entry representing cost over evaluation periods."""
period: int
base_cost: float = Field(..., ge=0)
escalated_cost: float | None = Field(None, ge=0)
class OpexMetrics(BaseModel):
"""Derived KPIs for opex outputs."""
annual_average: float | None
cost_per_ton: float | None
class OpexTotals(BaseModel):
"""Aggregated totals for opex."""
overall_annual: float = Field(..., ge=0)
escalated_total: float | None = Field(None, ge=0)
escalation_pct: float | None = Field(None, ge=0, le=100)
by_category: List[OpexCategoryBreakdown] = Field(
default_factory=list
)
class OpexCalculationResult(BaseModel):
"""Response body summarising opex calculations."""
totals: OpexTotals
timeline: List[OpexTimelineEntry] = Field(default_factory=list)
metrics: OpexMetrics
components: List[OpexComponentInput] = Field(
default_factory=list)
parameters: OpexParameters
options: OpexOptions
currency: str | None
__all__ = [
"ImpurityInput",
"ProfitabilityCalculationRequest",
"ProfitabilityCosts",
"ProfitabilityMetrics",
"CashFlowEntry",
"ProfitabilityCalculationResult",
"CapexComponentInput",
"CapexParameters",
"CapexCalculationOptions",
"CapexCalculationRequest",
"CapexCategoryBreakdown",
"CapexTotals",
"CapexTimelineEntry",
"CapexCalculationResult",
"OpexComponentInput",
"OpexParameters",
"OpexOptions",
"OpexCalculationRequest",
"OpexCategoryBreakdown",
"OpexTimelineEntry",
"OpexMetrics",
"OpexTotals",
"OpexCalculationResult",
"ValidationError",
]
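
A hypothetical profitability request showing the normalisation performed by the field validators (all figures are illustrative, not reference data):

from schemas.calculations import ImpurityInput, ProfitabilityCalculationRequest

request = ProfitabilityCalculationRequest(
    metal="Copper",        # normalised to "copper"
    ore_tonnage=1_000_000,
    head_grade_pct=1.2,
    recovery_pct=88.0,
    reference_price=8_500.0,
    currency_code="usd",   # normalised to "USD"
    opex=25.0,
    capex=400_000_000,
    discount_rate=8.0,
    periods=12,
    impurities=[
        ImpurityInput(name=" arsenic ", value=0.3, threshold=0.2, penalty=1.5),
    ],
)
assert request.metal == "copper"
assert request.currency_code == "USD"
assert request.impurities[0].name == "arsenic"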

69
schemas/exports.py Normal file

@@ -0,0 +1,69 @@
from __future__ import annotations
from enum import Enum
from typing import Literal
from pydantic import BaseModel, ConfigDict, field_validator
from services.export_query import ProjectExportFilters, ScenarioExportFilters
class ExportFormat(str, Enum):
CSV = "csv"
XLSX = "xlsx"
class BaseExportRequest(BaseModel):
format: ExportFormat = ExportFormat.CSV
include_metadata: bool = False
model_config = ConfigDict(extra="forbid")
class ProjectExportRequest(BaseExportRequest):
filters: ProjectExportFilters | None = None
@field_validator("filters", mode="before")
@classmethod
def validate_filters(cls, value: ProjectExportFilters | None) -> ProjectExportFilters | None:
if value is None:
return None
if isinstance(value, ProjectExportFilters):
return value
return ProjectExportFilters(**value)
class ScenarioExportRequest(BaseExportRequest):
filters: ScenarioExportFilters | None = None
@field_validator("filters", mode="before")
@classmethod
def validate_filters(cls, value: ScenarioExportFilters | None) -> ScenarioExportFilters | None:
if value is None:
return None
if isinstance(value, ScenarioExportFilters):
return value
return ScenarioExportFilters(**value)
class ExportTicket(BaseModel):
token: str
format: ExportFormat
resource: Literal["projects", "scenarios"]
model_config = ConfigDict(extra="forbid")
class ExportResponse(BaseModel):
ticket: ExportTicket
model_config = ConfigDict(extra="forbid")
__all__ = [
"ExportFormat",
"ProjectExportRequest",
"ScenarioExportRequest",
"ExportTicket",
"ExportResponse",
]
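
A minimal sketch of the request model; plain strings coerce into the ExportFormat enum:

from schemas.exports import ExportFormat, ProjectExportRequest

req = ProjectExportRequest(format="xlsx", include_metadata=True)
assert req.format is ExportFormat.XLSX
assert req.filters is None  # filters stay optional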

292
schemas/imports.py Normal file

@@ -0,0 +1,292 @@
from __future__ import annotations
from datetime import date, datetime
from typing import Any, Mapping
from typing import Literal
from pydantic import BaseModel, ConfigDict, Field, ValidationInfo, field_validator, model_validator
from models import MiningOperationType, ResourceType, ScenarioStatus
from services.currency import CurrencyValidationError, normalise_currency
PreviewStateLiteral = Literal["new", "update", "skip", "error"]
def _normalise_string(value: Any) -> str:
if value is None:
return ""
if isinstance(value, str):
return value.strip()
return str(value).strip()
def _strip_or_none(value: Any | None) -> str | None:
if value is None:
return None
text = _normalise_string(value)
return text or None
def _coerce_enum(value: Any, enum_cls: Any, aliases: Mapping[str, Any]) -> Any:
if value is None:
return value
if isinstance(value, enum_cls):
return value
text = _normalise_string(value).lower()
if not text:
return None
if text in aliases:
return aliases[text]
try:
return enum_cls(text)
except ValueError as exc: # pragma: no cover - surfaced by Pydantic
raise ValueError(
f"Invalid value '{value}' for {enum_cls.__name__}") from exc
OPERATION_TYPE_ALIASES: dict[str, MiningOperationType] = {
"open pit": MiningOperationType.OPEN_PIT,
"openpit": MiningOperationType.OPEN_PIT,
"underground": MiningOperationType.UNDERGROUND,
"in-situ leach": MiningOperationType.IN_SITU_LEACH,
"in situ": MiningOperationType.IN_SITU_LEACH,
"placer": MiningOperationType.PLACER,
"quarry": MiningOperationType.QUARRY,
"mountaintop removal": MiningOperationType.MOUNTAINTOP_REMOVAL,
"other": MiningOperationType.OTHER,
}
SCENARIO_STATUS_ALIASES: dict[str, ScenarioStatus] = {
"draft": ScenarioStatus.DRAFT,
"active": ScenarioStatus.ACTIVE,
"archived": ScenarioStatus.ARCHIVED,
}
RESOURCE_TYPE_ALIASES: dict[str, ResourceType] = {
key.replace("_", " ").lower(): value for key, value in ResourceType.__members__.items()
}
RESOURCE_TYPE_ALIASES.update(
{value.value.replace("_", " ").lower(): value for value in ResourceType}
)
class ProjectImportRow(BaseModel):
name: str
location: str | None = None
operation_type: MiningOperationType
description: str | None = None
created_at: datetime | None = None
updated_at: datetime | None = None
model_config = ConfigDict(extra="forbid")
@field_validator("name", mode="before")
@classmethod
def validate_name(cls, value: Any) -> str:
text = _normalise_string(value)
if not text:
raise ValueError("Project name is required")
return text
@field_validator("location", "description", mode="before")
@classmethod
def optional_text(cls, value: Any | None) -> str | None:
return _strip_or_none(value)
@field_validator("operation_type", mode="before")
@classmethod
def map_operation_type(cls, value: Any) -> MiningOperationType | None:
return _coerce_enum(value, MiningOperationType, OPERATION_TYPE_ALIASES)
class ScenarioImportRow(BaseModel):
project_name: str
name: str
status: ScenarioStatus = ScenarioStatus.DRAFT
start_date: date | None = None
end_date: date | None = None
discount_rate: float | None = None
currency: str | None = None
primary_resource: ResourceType | None = None
description: str | None = None
created_at: datetime | None = None
updated_at: datetime | None = None
model_config = ConfigDict(extra="forbid")
@field_validator("project_name", "name", mode="before")
@classmethod
    def validate_required_text(cls, value: Any, info: ValidationInfo) -> str:
text = _normalise_string(value)
if not text:
raise ValueError(
f"{info.field_name.replace('_', ' ').title()} is required")
return text
@field_validator("status", mode="before")
@classmethod
def map_status(cls, value: Any) -> ScenarioStatus | None:
return _coerce_enum(value, ScenarioStatus, SCENARIO_STATUS_ALIASES)
@field_validator("primary_resource", mode="before")
@classmethod
def map_resource(cls, value: Any) -> ResourceType | None:
return _coerce_enum(value, ResourceType, RESOURCE_TYPE_ALIASES)
@field_validator("description", mode="before")
@classmethod
def optional_description(cls, value: Any | None) -> str | None:
return _strip_or_none(value)
@field_validator("currency", mode="before")
@classmethod
def normalise_currency(cls, value: Any | None) -> str | None:
text = _strip_or_none(value)
if text is None:
return None
try:
return normalise_currency(text)
except CurrencyValidationError as exc:
raise ValueError(str(exc)) from exc
@field_validator("discount_rate", mode="before")
@classmethod
def coerce_discount_rate(cls, value: Any | None) -> float | None:
if value is None:
return None
if isinstance(value, (int, float)):
return float(value)
text = _normalise_string(value)
if not text:
return None
if text.endswith("%"):
text = text[:-1]
try:
return float(text)
except ValueError as exc:
raise ValueError("Discount rate must be numeric") from exc
@model_validator(mode="after")
def validate_dates(self) -> "ScenarioImportRow":
if self.start_date and self.end_date and self.start_date > self.end_date:
raise ValueError("End date must be on or after start date")
return self
class ImportRowErrorModel(BaseModel):
row_number: int
field: str | None = None
message: str
model_config = ConfigDict(from_attributes=True, extra="forbid")
class ImportPreviewRowIssueModel(BaseModel):
message: str
field: str | None = None
model_config = ConfigDict(from_attributes=True, extra="forbid")
class ImportPreviewRowIssuesModel(BaseModel):
row_number: int
state: PreviewStateLiteral | None = None
issues: list[ImportPreviewRowIssueModel] = Field(default_factory=list)
model_config = ConfigDict(from_attributes=True, extra="forbid")
class ImportPreviewSummaryModel(BaseModel):
total_rows: int
accepted: int
skipped: int
errored: int
model_config = ConfigDict(from_attributes=True, extra="forbid")
class ProjectImportPreviewRow(BaseModel):
row_number: int
data: ProjectImportRow
state: PreviewStateLiteral
issues: list[str] = Field(default_factory=list)
context: dict[str, Any] | None = None
model_config = ConfigDict(from_attributes=True, extra="forbid")
class ScenarioImportPreviewRow(BaseModel):
row_number: int
data: ScenarioImportRow
state: PreviewStateLiteral
issues: list[str] = Field(default_factory=list)
context: dict[str, Any] | None = None
model_config = ConfigDict(from_attributes=True, extra="forbid")
class ProjectImportPreviewResponse(BaseModel):
rows: list[ProjectImportPreviewRow]
summary: ImportPreviewSummaryModel
row_issues: list[ImportPreviewRowIssuesModel] = Field(default_factory=list)
parser_errors: list[ImportRowErrorModel] = Field(default_factory=list)
stage_token: str | None = None
model_config = ConfigDict(from_attributes=True, extra="forbid")
class ScenarioImportPreviewResponse(BaseModel):
rows: list[ScenarioImportPreviewRow]
summary: ImportPreviewSummaryModel
row_issues: list[ImportPreviewRowIssuesModel] = Field(default_factory=list)
parser_errors: list[ImportRowErrorModel] = Field(default_factory=list)
stage_token: str | None = None
model_config = ConfigDict(from_attributes=True, extra="forbid")
class ImportCommitSummaryModel(BaseModel):
created: int
updated: int
model_config = ConfigDict(from_attributes=True, extra="forbid")
class ProjectImportCommitRow(BaseModel):
row_number: int
data: ProjectImportRow
context: dict[str, Any]
model_config = ConfigDict(from_attributes=True, extra="forbid")
class ScenarioImportCommitRow(BaseModel):
row_number: int
data: ScenarioImportRow
context: dict[str, Any]
model_config = ConfigDict(from_attributes=True, extra="forbid")
class ProjectImportCommitResponse(BaseModel):
token: str
rows: list[ProjectImportCommitRow]
summary: ImportCommitSummaryModel
model_config = ConfigDict(from_attributes=True, extra="forbid")
class ScenarioImportCommitResponse(BaseModel):
token: str
rows: list[ScenarioImportCommitRow]
summary: ImportCommitSummaryModel
model_config = ConfigDict(from_attributes=True, extra="forbid")
class ImportCommitRequest(BaseModel):
token: str
model_config = ConfigDict(extra="forbid")
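
An illustrative pass through the import row models; aliases, percent strings, and date ordering are all handled by the validators above (names and values are made up):

from schemas.imports import ProjectImportRow, ScenarioImportRow

project_row = ProjectImportRow(name="Highland Copper", operation_type="Open Pit")
# "Open Pit" resolves via OPERATION_TYPE_ALIASES to MiningOperationType.OPEN_PIT.

row = ScenarioImportRow(
    project_name="  Highland Copper  ",  # whitespace stripped
    name="Base Case",
    status="Active",       # alias-mapped to ScenarioStatus.ACTIVE
    discount_rate="8.5%",  # trailing percent sign stripped, coerced to float
    currency="usd",        # validated via normalise_currency
    start_date="2026-01-01",
    end_date="2027-12-31",
)
assert row.project_name == "Highland Copper"
assert row.discount_rate == 8.5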

36
schemas/navigation.py Normal file

@@ -0,0 +1,36 @@
from __future__ import annotations
from datetime import datetime
from typing import List
from pydantic import BaseModel, Field
class NavigationLinkSchema(BaseModel):
id: int
label: str
href: str
match_prefix: str | None = Field(default=None)
icon: str | None = Field(default=None)
tooltip: str | None = Field(default=None)
is_external: bool = Field(default=False)
children: List["NavigationLinkSchema"] = Field(default_factory=list)
class NavigationGroupSchema(BaseModel):
id: int
label: str
icon: str | None = Field(default=None)
tooltip: str | None = Field(default=None)
links: List[NavigationLinkSchema] = Field(default_factory=list)
class NavigationSidebarResponse(BaseModel):
groups: List[NavigationGroupSchema]
roles: List[str] = Field(default_factory=list)
generated_at: datetime
NavigationLinkSchema.model_rebuild()
NavigationGroupSchema.model_rebuild()
NavigationSidebarResponse.model_rebuild()
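
The model_rebuild calls resolve the self-referencing children list, which supports nested menus; a small illustrative build:

from datetime import datetime, timezone

from schemas.navigation import (
    NavigationGroupSchema,
    NavigationLinkSchema,
    NavigationSidebarResponse,
)

child = NavigationLinkSchema(id=2, label="Scenarios", href="/projects/1/scenarios/ui")
parent = NavigationLinkSchema(
    id=1, label="Projects", href="/projects/ui", children=[child]
)
sidebar = NavigationSidebarResponse(
    groups=[NavigationGroupSchema(id=1, label="Planning", links=[parent])],
    roles=["viewer"],
    generated_at=datetime.now(timezone.utc),
)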

37
schemas/project.py Normal file

@@ -0,0 +1,37 @@
from __future__ import annotations
from datetime import datetime
from pydantic import BaseModel, ConfigDict
from models import MiningOperationType
class ProjectBase(BaseModel):
name: str
location: str | None = None
operation_type: MiningOperationType
description: str | None = None
model_config = ConfigDict(extra="forbid")
class ProjectCreate(ProjectBase):
pass
class ProjectUpdate(BaseModel):
name: str | None = None
location: str | None = None
operation_type: MiningOperationType | None = None
description: str | None = None
model_config = ConfigDict(extra="forbid")
class ProjectRead(ProjectBase):
id: int
created_at: datetime
updated_at: datetime
model_config = ConfigDict(from_attributes=True)

97
schemas/scenario.py Normal file

@@ -0,0 +1,97 @@
from __future__ import annotations
from datetime import date, datetime
from pydantic import BaseModel, ConfigDict, field_validator, model_validator
from models import ResourceType, ScenarioStatus
from services.currency import CurrencyValidationError, normalise_currency
class ScenarioBase(BaseModel):
name: str
description: str | None = None
status: ScenarioStatus = ScenarioStatus.DRAFT
start_date: date | None = None
end_date: date | None = None
discount_rate: float | None = None
currency: str | None = None
primary_resource: ResourceType | None = None
model_config = ConfigDict(extra="forbid")
@field_validator("currency")
@classmethod
def normalise_currency(cls, value: str | None) -> str | None:
if value is None:
return None
candidate = value if isinstance(value, str) else str(value)
candidate = candidate.strip()
if not candidate:
return None
try:
return normalise_currency(candidate)
except CurrencyValidationError as exc:
raise ValueError(str(exc)) from exc
class ScenarioCreate(ScenarioBase):
pass
class ScenarioUpdate(BaseModel):
name: str | None = None
description: str | None = None
status: ScenarioStatus | None = None
start_date: date | None = None
end_date: date | None = None
discount_rate: float | None = None
currency: str | None = None
primary_resource: ResourceType | None = None
model_config = ConfigDict(extra="forbid")
@field_validator("currency")
@classmethod
def normalise_currency(cls, value: str | None) -> str | None:
if value is None:
return None
candidate = value if isinstance(value, str) else str(value)
candidate = candidate.strip()
if not candidate:
return None
try:
return normalise_currency(candidate)
except CurrencyValidationError as exc:
raise ValueError(str(exc)) from exc
class ScenarioRead(ScenarioBase):
id: int
project_id: int
created_at: datetime
updated_at: datetime
model_config = ConfigDict(from_attributes=True)
class ScenarioComparisonRequest(BaseModel):
scenario_ids: list[int]
model_config = ConfigDict(extra="forbid")
@model_validator(mode="after")
def ensure_minimum_ids(self) -> "ScenarioComparisonRequest":
unique_ids: list[int] = list(dict.fromkeys(self.scenario_ids))
if len(unique_ids) < 2:
raise ValueError(
"At least two unique scenario identifiers are required for comparison.")
self.scenario_ids = unique_ids
return self
class ScenarioComparisonResponse(BaseModel):
project_id: int
scenarios: list[ScenarioRead]
model_config = ConfigDict(from_attributes=True)
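
A short sketch of the comparison request's deduplication rule (ids are illustrative):

from pydantic import ValidationError

from schemas.scenario import ScenarioComparisonRequest

req = ScenarioComparisonRequest(scenario_ids=[3, 3, 5])
assert req.scenario_ids == [3, 5]  # duplicates collapse, order preserved

try:
    ScenarioComparisonRequest(scenario_ids=[7, 7])  # only one unique id
except ValidationError:
    print("rejected: at least two unique scenario ids are required")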


@@ -0,0 +1,20 @@
from __future__ import annotations
import logging
from scripts.initial_data import load_config, seed_initial_data
def main() -> int:
logging.basicConfig(level=logging.INFO, format="[%(levelname)s] %(message)s")
try:
config = load_config()
seed_initial_data(config)
except Exception as exc: # pragma: no cover - operational guard
logging.exception("Seeding failed: %s", exc)
return 1
return 0
if __name__ == "__main__":
raise SystemExit(main())

1
scripts/__init__.py Normal file

@@ -0,0 +1 @@
"""Utility scripts for CalMiner maintenance tasks."""


@@ -0,0 +1,112 @@
"""Utility script to verify key authenticated routes respond without errors."""
from __future__ import annotations
import json
import os
import sys
import urllib.parse
from http.client import HTTPConnection
from http.cookies import SimpleCookie
from typing import Dict, List, Tuple
HOST = "127.0.0.1"
PORT = 8000
cookies: Dict[str, str] = {}
def _update_cookies(headers: List[Tuple[str, str]]) -> None:
for name, value in headers:
if name.lower() != "set-cookie":
continue
cookie = SimpleCookie()
cookie.load(value)
for key, morsel in cookie.items():
cookies[key] = morsel.value
def _cookie_header() -> str | None:
if not cookies:
return None
return "; ".join(f"{key}={value}" for key, value in cookies.items())
def request(
    method: str,
    path: str,
    *,
    body: bytes | None = None,
    headers: Dict[str, str] | None = None,
) -> Tuple[int, Dict[str, str], bytes]:
conn = HTTPConnection(HOST, PORT, timeout=10)
prepared_headers = {"User-Agent": "route-checker"}
if headers:
prepared_headers.update(headers)
cookie_header = _cookie_header()
if cookie_header:
prepared_headers["Cookie"] = cookie_header
conn.request(method, path, body=body, headers=prepared_headers)
resp = conn.getresponse()
payload = resp.read()
status = resp.status
reason = resp.reason
response_headers = {name: value for name, value in resp.getheaders()}
_update_cookies(list(resp.getheaders()))
conn.close()
print(f"{method} {path} -> {status} {reason}")
return status, response_headers, payload
def main() -> int:
status, _, _ = request("GET", "/login")
if status != 200:
print("Unexpected status for GET /login", file=sys.stderr)
return 1
admin_username = os.getenv("CALMINER_SEED_ADMIN_USERNAME", "admin")
admin_password = os.getenv("CALMINER_SEED_ADMIN_PASSWORD", "M11ffpgm.")
login_payload = urllib.parse.urlencode(
{"username": admin_username, "password": admin_password}
).encode()
status, headers, _ = request(
"POST",
"/login",
body=login_payload,
headers={"Content-Type": "application/x-www-form-urlencoded"},
)
if status not in {200, 303}:
print("Login failed", file=sys.stderr)
return 1
location = headers.get("Location", "/")
redirect_path = urllib.parse.urlsplit(location).path or "/"
request("GET", redirect_path)
request("GET", "/")
request("GET", "/projects/ui")
status, headers, body = request(
"GET",
"/projects",
headers={"Accept": "application/json"},
)
projects: List[dict] = []
if headers.get("Content-Type", "").startswith("application/json"):
projects = json.loads(body.decode())
if projects:
project_id = projects[0]["id"]
request("GET", f"/projects/{project_id}/view")
status, headers, body = request(
"GET",
f"/projects/{project_id}/scenarios",
headers={"Accept": "application/json"},
)
scenarios: List[dict] = []
if headers.get("Content-Type", "").startswith("application/json"):
scenarios = json.loads(body.decode())
if scenarios:
scenario_id = scenarios[0]["id"]
request("GET", f"/scenarios/{scenario_id}/view")
print("Cookies:", cookies)
return 0
if __name__ == "__main__":
raise SystemExit(main())


@@ -0,0 +1,15 @@
from sqlalchemy import create_engine, text
from config.database import DATABASE_URL
engine = create_engine(DATABASE_URL, future=True)
sqls = [
"CREATE SEQUENCE IF NOT EXISTS users_id_seq;",
"ALTER TABLE users ALTER COLUMN id SET DEFAULT nextval('users_id_seq');",
"SELECT setval('users_id_seq', COALESCE((SELECT MAX(id) FROM users), 1));",
"ALTER SEQUENCE users_id_seq OWNED BY users.id;",
]
with engine.begin() as conn:
for s in sqls:
print('EXECUTING:', s)
conn.execute(text(s))
print('SEQUENCE fix applied')

1468
scripts/init_db.py Normal file

File diff suppressed because it is too large.

231
scripts/initial_data.py Normal file

@@ -0,0 +1,231 @@
from __future__ import annotations
import logging
import os
from dataclasses import dataclass
from typing import Callable, Iterable
from dotenv import load_dotenv
from config.settings import Settings
from models import Role, User
from services.repositories import (
DEFAULT_ROLE_DEFINITIONS,
PricingSettingsSeedResult,
RoleRepository,
UserRepository,
ensure_default_pricing_settings,
)
from services.unit_of_work import UnitOfWork
@dataclass
class SeedConfig:
admin_email: str
admin_username: str
admin_password: str
admin_roles: tuple[str, ...]
force_reset: bool
@dataclass
class RoleSeedResult:
created: int
updated: int
total: int
@dataclass
class AdminSeedResult:
created_user: bool
updated_user: bool
password_rotated: bool
roles_granted: int
def parse_bool(value: str | None) -> bool:
if value is None:
return False
return value.strip().lower() in {"1", "true", "yes", "on"}
def normalise_role_list(raw_value: str | None) -> tuple[str, ...]:
if not raw_value:
return ("admin",)
parts = [segment.strip()
for segment in raw_value.split(",") if segment.strip()]
if "admin" not in parts:
parts.insert(0, "admin")
seen: set[str] = set()
ordered: list[str] = []
for role_name in parts:
if role_name not in seen:
ordered.append(role_name)
seen.add(role_name)
return tuple(ordered)
def load_config() -> SeedConfig:
load_dotenv()
admin_email = os.getenv("CALMINER_SEED_ADMIN_EMAIL",
"admin@calminer.local")
admin_username = os.getenv("CALMINER_SEED_ADMIN_USERNAME", "admin")
admin_password = os.getenv("CALMINER_SEED_ADMIN_PASSWORD", "ChangeMe123!")
admin_roles = normalise_role_list(os.getenv("CALMINER_SEED_ADMIN_ROLES"))
force_reset = parse_bool(os.getenv("CALMINER_SEED_FORCE"))
return SeedConfig(
admin_email=admin_email,
admin_username=admin_username,
admin_password=admin_password,
admin_roles=admin_roles,
force_reset=force_reset,
)
def ensure_default_roles(
role_repo: RoleRepository,
definitions: Iterable[dict[str, str]] = DEFAULT_ROLE_DEFINITIONS,
) -> RoleSeedResult:
created = 0
updated = 0
total = 0
for definition in definitions:
total += 1
existing = role_repo.get_by_name(definition["name"])
if existing is None:
role_repo.create(Role(**definition))
created += 1
continue
changed = False
if existing.display_name != definition["display_name"]:
existing.display_name = definition["display_name"]
changed = True
if existing.description != definition["description"]:
existing.description = definition["description"]
changed = True
if changed:
updated += 1
role_repo.session.flush()
return RoleSeedResult(created=created, updated=updated, total=total)
def ensure_admin_user(
user_repo: UserRepository,
role_repo: RoleRepository,
config: SeedConfig,
) -> AdminSeedResult:
created_user = False
updated_user = False
password_rotated = False
roles_granted = 0
user = user_repo.get_by_email(config.admin_email, with_roles=True)
if user is None:
user = User(
email=config.admin_email,
username=config.admin_username,
password_hash=User.hash_password(config.admin_password),
is_active=True,
is_superuser=True,
)
user_repo.create(user)
created_user = True
else:
if user.username != config.admin_username:
user.username = config.admin_username
updated_user = True
if not user.is_active:
user.is_active = True
updated_user = True
if not user.is_superuser:
user.is_superuser = True
updated_user = True
if config.force_reset:
user.password_hash = User.hash_password(config.admin_password)
password_rotated = True
updated_user = True
user_repo.session.flush()
for role_name in config.admin_roles:
role = role_repo.get_by_name(role_name)
if role is None:
logging.warning(
"Role '%s' is not defined and will be skipped", role_name)
continue
already_assigned = any(assignment.role_id ==
role.id for assignment in user.role_assignments)
if already_assigned:
continue
user_repo.assign_role(
user_id=user.id, role_id=role.id, granted_by=user.id)
roles_granted += 1
return AdminSeedResult(
created_user=created_user,
updated_user=updated_user,
password_rotated=password_rotated,
roles_granted=roles_granted,
)
def seed_initial_data(
config: SeedConfig,
*,
unit_of_work_factory: Callable[[], UnitOfWork] | None = None,
) -> None:
logging.info("Starting initial data seeding")
factory = unit_of_work_factory or UnitOfWork
with factory() as uow:
assert (
uow.roles is not None
and uow.users is not None
and uow.pricing_settings is not None
and uow.projects is not None
)
role_result = ensure_default_roles(uow.roles)
admin_result = ensure_admin_user(uow.users, uow.roles, config)
pricing_metadata = uow.get_pricing_metadata()
metadata_source = "database"
if pricing_metadata is None:
pricing_metadata = Settings.from_environment().pricing_metadata()
metadata_source = "environment"
pricing_result: PricingSettingsSeedResult = ensure_default_pricing_settings(
uow.pricing_settings,
metadata=pricing_metadata,
)
projects_without_pricing = [
project
for project in uow.projects.list(with_pricing=True)
if project.pricing_settings is None
]
assigned_projects = 0
for project in projects_without_pricing:
uow.set_project_pricing_settings(project, pricing_result.settings)
assigned_projects += 1
logging.info(
"Roles processed: %s total, %s created, %s updated",
role_result.total,
role_result.created,
role_result.updated,
)
logging.info(
"Admin user: created=%s updated=%s password_rotated=%s roles_granted=%s",
admin_result.created_user,
admin_result.updated_user,
admin_result.password_rotated,
admin_result.roles_granted,
)
logging.info(
"Pricing settings ensured (source=%s): slug=%s created=%s updated_fields=%s impurity_upserts=%s",
metadata_source,
pricing_result.settings.slug,
pricing_result.created,
pricing_result.updated_fields,
pricing_result.impurity_upserts,
)
logging.info(
"Projects updated with default pricing settings: %s",
assigned_projects,
)
logging.info("Initial data seeding completed successfully")

91
scripts/reset_db.py Normal file

@@ -0,0 +1,91 @@
"""Utility to reset development Postgres schema artifacts.
This script drops managed tables and enum types created by `scripts.init_db`.
It is intended for local development only; it refuses to run if CALMINER_ENV
indicates production or staging. The operation is idempotent: missing objects
are ignored. Use with caution.
"""
from __future__ import annotations
import logging
import os
from dataclasses import dataclass
from typing import Iterable
from sqlalchemy import text
from sqlalchemy.engine import Engine
from config.database import DATABASE_URL
from scripts.init_db import ENUM_DEFINITIONS, _create_engine
logger = logging.getLogger(__name__)
@dataclass(slots=True)
class ResetOptions:
drop_tables: bool = True
drop_enums: bool = True
MANAGED_TABLES: tuple[str, ...] = (
"simulation_parameters",
"financial_inputs",
"scenarios",
"projects",
"pricing_impurity_settings",
"pricing_metal_settings",
"pricing_settings",
"user_roles",
"users",
"roles",
)
FORBIDDEN_ENVIRONMENTS: set[str] = {"production", "staging", "prod", "stage"}
def _ensure_safe_environment() -> None:
env = os.getenv("CALMINER_ENV", "development").lower()
if env in FORBIDDEN_ENVIRONMENTS:
raise RuntimeError(
f"Refusing to reset database in environment '{env}'. "
"Set CALMINER_ENV to 'development' to proceed."
)
def _drop_tables(engine: Engine, tables: Iterable[str]) -> None:
if not tables:
return
with engine.begin() as conn:
for table in tables:
logger.info("Dropping table if exists: %s", table)
conn.execute(text(f"DROP TABLE IF EXISTS {table} CASCADE"))
def _drop_enums(engine: Engine, enum_names: Iterable[str]) -> None:
if not enum_names:
return
with engine.begin() as conn:
for enum_name in enum_names:
logger.info("Dropping enum type if exists: %s", enum_name)
conn.execute(text(f"DROP TYPE IF EXISTS {enum_name} CASCADE"))
def reset_database(*, options: ResetOptions | None = None, database_url: str | None = None) -> None:
"""Drop managed tables and enums for a clean slate."""
_ensure_safe_environment()
opts = options or ResetOptions()
engine = _create_engine(database_url or DATABASE_URL)
if opts.drop_tables:
_drop_tables(engine, MANAGED_TABLES)
if opts.drop_enums:
_drop_enums(engine, ENUM_DEFINITIONS.keys())
logger.info("Database reset complete")
if __name__ == "__main__":
logging.basicConfig(level=logging.INFO)
reset_database()
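
A sketch of a selective reset; _ensure_safe_environment raises outside development, so this only runs locally:

from scripts.reset_db import ResetOptions, reset_database

# Drop the managed tables but leave the enum types in place.
reset_database(options=ResetOptions(drop_tables=True, drop_enums=False))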

86
scripts/verify_db.py Normal file

@@ -0,0 +1,86 @@
"""Verify DB initialization results: enums, roles, admin user, pricing_settings."""
from __future__ import annotations
import logging
from sqlalchemy import create_engine, text
from config.database import DATABASE_URL
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
ENUMS = [
'miningoperationtype',
'scenariostatus',
'financialcategory',
'costbucket',
'distributiontype',
'stochasticvariable',
'resourcetype',
]
SQL_CHECK_ENUM = "SELECT typname FROM pg_type WHERE typname = ANY(:names)"
SQL_ROLES = "SELECT id, name, display_name FROM roles ORDER BY id"
SQL_ADMIN = "SELECT id, email, username, is_active, is_superuser FROM users WHERE id = 1"
SQL_USER_ROLES = "SELECT user_id, role_id, granted_by FROM user_roles WHERE user_id = 1"
SQL_PRICING = "SELECT id, slug, name, default_currency FROM pricing_settings WHERE slug = 'default'"
def run():
engine = create_engine(DATABASE_URL, future=True)
with engine.connect() as conn:
print('Using DATABASE_URL:', DATABASE_URL)
# enums
res = conn.execute(text(SQL_CHECK_ENUM), dict(names=ENUMS)).fetchall()
found = [r[0] for r in res]
print('\nEnums found:')
for name in ENUMS:
print(f' {name}:', 'YES' if name in found else 'NO')
# roles
try:
roles = conn.execute(text(SQL_ROLES)).fetchall()
print('\nRoles:')
if roles:
for r in roles:
print(f' id={r.id} name={r.name} display_name={r.display_name}')
else:
print(' (no roles found)')
except Exception as e:
print('\nRoles query failed:', e)
# admin user
try:
admin = conn.execute(text(SQL_ADMIN)).fetchone()
print('\nAdmin user:')
if admin:
print(f' id={admin.id} email={admin.email} username={admin.username} is_active={admin.is_active} is_superuser={admin.is_superuser}')
else:
print(' (admin user not found)')
except Exception as e:
print('\nAdmin query failed:', e)
# user_roles
try:
ur = conn.execute(text(SQL_USER_ROLES)).fetchall()
print('\nUser roles for user_id=1:')
if ur:
for row in ur:
print(f' user_id={row.user_id} role_id={row.role_id} granted_by={row.granted_by}')
else:
print(' (no user_roles rows for user_id=1)')
except Exception as e:
print('\nUser_roles query failed:', e)
# pricing settings
try:
p = conn.execute(text(SQL_PRICING)).fetchone()
print('\nPricing settings (slug=default):')
if p:
print(f' id={p.id} slug={p.slug} name={p.name} default_currency={p.default_currency}')
else:
print(' (default pricing settings not found)')
except Exception as e:
print('\nPricing query failed:', e)
if __name__ == '__main__':
run()

12
services/__init__.py Normal file

@@ -0,0 +1,12 @@
"""Service layer utilities."""
from .pricing import calculate_pricing, PricingInput, PricingMetadata, PricingResult
from .calculations import calculate_profitability
__all__ = [
"calculate_pricing",
"PricingInput",
"PricingMetadata",
"PricingResult",
"calculate_profitability",
]

104
services/authorization.py Normal file

@@ -0,0 +1,104 @@
from __future__ import annotations
from typing import Iterable
from models import Project, Role, Scenario, User
from services.exceptions import AuthorizationError, EntityNotFoundError
from services.repositories import ProjectRepository, ScenarioRepository
from services.unit_of_work import UnitOfWork
READ_ROLES: frozenset[str] = frozenset(
{"viewer", "analyst", "project_manager", "admin"}
)
MANAGE_ROLES: frozenset[str] = frozenset({"project_manager", "admin"})
def _user_role_names(user: User) -> set[str]:
roles: Iterable[Role] = getattr(user, "roles", []) or []
return {role.name for role in roles}
def _require_project_repo(uow: UnitOfWork) -> ProjectRepository:
if not uow.projects:
raise RuntimeError("Project repository not initialised")
return uow.projects
def _require_scenario_repo(uow: UnitOfWork) -> ScenarioRepository:
if not uow.scenarios:
raise RuntimeError("Scenario repository not initialised")
return uow.scenarios
def _assert_user_can_access(user: User, *, require_manage: bool) -> None:
if not user.is_active:
raise AuthorizationError("User account is disabled.")
if user.is_superuser:
return
allowed = MANAGE_ROLES if require_manage else READ_ROLES
if not _user_role_names(user) & allowed:
raise AuthorizationError(
"Insufficient role permissions for this action.")
def ensure_project_access(
uow: UnitOfWork,
*,
project_id: int,
user: User,
require_manage: bool = False,
) -> Project:
"""Resolve a project and ensure the user holds the required permissions."""
repo = _require_project_repo(uow)
project = repo.get(project_id)
_assert_user_can_access(user, require_manage=require_manage)
return project
def ensure_scenario_access(
uow: UnitOfWork,
*,
scenario_id: int,
user: User,
require_manage: bool = False,
with_children: bool = False,
) -> Scenario:
"""Resolve a scenario and ensure the user holds the required permissions."""
repo = _require_scenario_repo(uow)
scenario = repo.get(scenario_id, with_children=with_children)
_assert_user_can_access(user, require_manage=require_manage)
return scenario
def ensure_scenario_in_project(
uow: UnitOfWork,
*,
project_id: int,
scenario_id: int,
user: User,
require_manage: bool = False,
with_children: bool = False,
) -> Scenario:
"""Resolve a scenario ensuring it belongs to the project and the user may access it."""
project = ensure_project_access(
uow,
project_id=project_id,
user=user,
require_manage=require_manage,
)
scenario = ensure_scenario_access(
uow,
scenario_id=scenario_id,
user=user,
require_manage=require_manage,
with_children=with_children,
)
if scenario.project_id != project.id:
raise EntityNotFoundError(
f"Scenario {scenario_id} does not belong to project {project_id}."
)
return scenario

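A minimal usage sketch for these guards, assuming UnitOfWork is entered as a context manager (as services/bootstrap.py does below) and that current_user is a loaded User; the handler itself is hypothetical:

from services.authorization import ensure_scenario_in_project
from services.unit_of_work import UnitOfWork

def archive_scenario(project_id: int, scenario_id: int, current_user) -> None:
    # Hypothetical handler: resolves the scenario, verifies it belongs to the
    # project, and requires a manage-capable role (project_manager/admin) or
    # superuser status before any mutation. AuthorizationError or
    # EntityNotFoundError propagate to the caller on failure.
    with UnitOfWork() as uow:
        scenario = ensure_scenario_in_project(
            uow,
            project_id=project_id,
            scenario_id=scenario_id,
            user=current_user,
            require_manage=True,
        )
        ...  # perform the privileged mutation here
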
services/bootstrap.py

@@ -0,0 +1,182 @@
from __future__ import annotations
import logging
from dataclasses import dataclass
from typing import Callable
from config.settings import AdminBootstrapSettings
from models import User
from services.pricing import PricingMetadata
from services.repositories import (
PricingSettingsSeedResult,
ensure_default_roles,
)
from services.unit_of_work import UnitOfWork
logger = logging.getLogger(__name__)
@dataclass(slots=True)
class RoleBootstrapResult:
created: int
ensured: int
@dataclass(slots=True)
class AdminBootstrapResult:
created_user: bool
updated_user: bool
password_rotated: bool
roles_granted: int
@dataclass(slots=True)
class PricingBootstrapResult:
seed: PricingSettingsSeedResult
projects_assigned: int
def bootstrap_admin(
*,
settings: AdminBootstrapSettings,
unit_of_work_factory: Callable[[], UnitOfWork] = UnitOfWork,
) -> tuple[RoleBootstrapResult, AdminBootstrapResult]:
"""Ensure default roles and administrator account exist."""
with unit_of_work_factory() as uow:
assert uow.roles is not None and uow.users is not None
role_result = _bootstrap_roles(uow)
admin_result = _bootstrap_admin_user(uow, settings)
logger.info(
"Admin bootstrap result: created_user=%s updated_user=%s password_rotated=%s roles_granted=%s",
admin_result.created_user,
admin_result.updated_user,
admin_result.password_rotated,
admin_result.roles_granted,
)
return role_result, admin_result
def _bootstrap_roles(uow: UnitOfWork) -> RoleBootstrapResult:
assert uow.roles is not None
before = {role.name for role in uow.roles.list()}
ensure_default_roles(uow.roles)
after = {role.name for role in uow.roles.list()}
created = len(after - before)
return RoleBootstrapResult(created=created, ensured=len(after))
def _bootstrap_admin_user(
uow: UnitOfWork,
settings: AdminBootstrapSettings,
) -> AdminBootstrapResult:
assert uow.users is not None and uow.roles is not None
created_user = False
updated_user = False
password_rotated = False
roles_granted = 0
user = uow.users.get_by_email(settings.email, with_roles=True)
if user is None:
user = User(
email=settings.email,
username=settings.username,
password_hash=User.hash_password(settings.password),
is_active=True,
is_superuser=True,
)
uow.users.create(user)
created_user = True
else:
if user.username != settings.username:
user.username = settings.username
updated_user = True
if not user.is_active:
user.is_active = True
updated_user = True
if not user.is_superuser:
user.is_superuser = True
updated_user = True
if settings.force_reset:
user.password_hash = User.hash_password(settings.password)
password_rotated = True
updated_user = True
uow.users.session.flush()
user = uow.users.get(user.id, with_roles=True)
assert user is not None
existing_roles = {role.name for role in user.roles}
for role_name in settings.roles:
role = uow.roles.get_by_name(role_name)
if role is None:
logger.warning(
"Bootstrap admin role '%s' is not defined; skipping assignment",
role_name,
)
continue
if role.name in existing_roles:
continue
uow.users.assign_role(
user_id=user.id,
role_id=role.id,
granted_by=user.id,
)
roles_granted += 1
existing_roles.add(role.name)
uow.users.session.flush()
return AdminBootstrapResult(
created_user=created_user,
updated_user=updated_user,
password_rotated=password_rotated,
roles_granted=roles_granted,
)
def bootstrap_pricing_settings(
*,
metadata: PricingMetadata,
unit_of_work_factory: Callable[[], UnitOfWork] = UnitOfWork,
default_slug: str = "default",
) -> PricingBootstrapResult:
"""Ensure baseline pricing settings exist and projects reference them."""
with unit_of_work_factory() as uow:
seed_result = uow.ensure_default_pricing_settings(
metadata=metadata,
slug=default_slug,
)
assigned = 0
if uow.projects:
default_settings = seed_result.settings
projects = uow.projects.list(with_pricing=True)
for project in projects:
if project.pricing_settings is None:
uow.set_project_pricing_settings(project, default_settings)
assigned += 1
# Capture logging-safe primitives while the UnitOfWork (and session)
# are still active to avoid DetachedInstanceError when accessing ORM
# instances outside the session scope.
seed_slug = seed_result.settings.slug if seed_result and seed_result.settings else None
seed_created = getattr(seed_result, "created", None)
seed_updated_fields = getattr(seed_result, "updated_fields", None)
seed_impurity_upserts = getattr(seed_result, "impurity_upserts", None)
logger.info(
"Pricing bootstrap result: slug=%s created=%s updated_fields=%s impurity_upserts=%s projects_assigned=%s",
seed_slug,
seed_created,
seed_updated_fields,
seed_impurity_upserts,
assigned,
)
return PricingBootstrapResult(seed=seed_result, projects_assigned=assigned)

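A sketch of invoking the bootstrap helpers at application startup. The field names on AdminBootstrapSettings match what _bootstrap_admin_user reads above; the constructor call and the concrete values are placeholders:

from config.settings import AdminBootstrapSettings
from services.bootstrap import bootstrap_admin

settings = AdminBootstrapSettings(
    email="admin@example.com",  # placeholder credentials
    username="admin",
    password="change-me",
    roles=("admin",),
    force_reset=False,
)
role_result, admin_result = bootstrap_admin(settings=settings)
# role_result.ensured counts all default roles; admin_result flags whether the
# account was created, updated, or had its password rotated on this run.
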
services/calculations.py

@@ -0,0 +1,535 @@
"""Service functions for financial calculations."""
from __future__ import annotations
from collections import defaultdict
from statistics import fmean
from services.currency import CurrencyValidationError, normalise_currency
from services.exceptions import (
CapexValidationError,
OpexValidationError,
ProfitabilityValidationError,
)
from services.financial import (
CashFlow,
ConvergenceError,
PaybackNotReachedError,
internal_rate_of_return,
net_present_value,
payback_period,
)
from services.pricing import PricingInput, PricingMetadata, PricingResult, calculate_pricing
from schemas.calculations import (
CapexCalculationRequest,
CapexCalculationResult,
CapexCategoryBreakdown,
CapexComponentInput,
CapexTotals,
CapexTimelineEntry,
CashFlowEntry,
OpexCalculationRequest,
OpexCalculationResult,
OpexCategoryBreakdown,
OpexComponentInput,
OpexMetrics,
OpexParameters,
OpexTotals,
OpexTimelineEntry,
ProfitabilityCalculationRequest,
ProfitabilityCalculationResult,
ProfitabilityCosts,
ProfitabilityMetrics,
)
_FREQUENCY_MULTIPLIER = {
"daily": 365,
"weekly": 52,
"monthly": 12,
"quarterly": 4,
"annually": 1,
}
def _build_pricing_input(
request: ProfitabilityCalculationRequest,
) -> PricingInput:
"""Construct a pricing input instance including impurity overrides."""
impurity_values: dict[str, float] = {}
impurity_thresholds: dict[str, float] = {}
impurity_penalties: dict[str, float] = {}
for impurity in request.impurities:
code = impurity.name.strip()
if not code:
continue
code = code.upper()
if impurity.value is not None:
impurity_values[code] = float(impurity.value)
if impurity.threshold is not None:
impurity_thresholds[code] = float(impurity.threshold)
if impurity.penalty is not None:
impurity_penalties[code] = float(impurity.penalty)
pricing_input = PricingInput(
metal=request.metal,
ore_tonnage=request.ore_tonnage,
head_grade_pct=request.head_grade_pct,
recovery_pct=request.recovery_pct,
payable_pct=request.payable_pct,
reference_price=request.reference_price,
treatment_charge=request.treatment_charge,
smelting_charge=request.smelting_charge,
moisture_pct=request.moisture_pct,
moisture_threshold_pct=request.moisture_threshold_pct,
moisture_penalty_per_pct=request.moisture_penalty_per_pct,
impurity_ppm=impurity_values,
impurity_thresholds=impurity_thresholds,
impurity_penalty_per_ppm=impurity_penalties,
premiums=request.premiums,
fx_rate=request.fx_rate,
currency_code=request.currency_code,
)
return pricing_input
def _generate_cash_flows(
*,
periods: int,
net_per_period: float,
capex: float,
) -> tuple[list[CashFlow], list[CashFlowEntry]]:
"""Create cash flow structures for financial metric calculations."""
cash_flow_models: list[CashFlow] = [
CashFlow(amount=-capex, period_index=0)
]
cash_flow_entries: list[CashFlowEntry] = [
CashFlowEntry(
period=0,
revenue=0.0,
opex=0.0,
sustaining_capex=0.0,
net=-capex,
)
]
for period in range(1, periods + 1):
cash_flow_models.append(
CashFlow(amount=net_per_period, period_index=period))
cash_flow_entries.append(
CashFlowEntry(
period=period,
revenue=0.0,
opex=0.0,
sustaining_capex=0.0,
net=net_per_period,
)
)
return cash_flow_models, cash_flow_entries
def calculate_profitability(
request: ProfitabilityCalculationRequest,
*,
metadata: PricingMetadata,
) -> ProfitabilityCalculationResult:
"""Calculate profitability metrics using pricing inputs and cost data."""
if request.periods <= 0:
raise ProfitabilityValidationError(
"Evaluation periods must be at least 1.", ["periods"]
)
pricing_input = _build_pricing_input(request)
try:
pricing_result: PricingResult = calculate_pricing(
pricing_input, metadata=metadata
)
except CurrencyValidationError as exc:
raise ProfitabilityValidationError(
str(exc), ["currency_code"]) from exc
periods = request.periods
revenue_total = float(pricing_result.net_revenue)
revenue_per_period = revenue_total / periods
processing_total = float(request.opex) * periods
sustaining_total = float(request.sustaining_capex) * periods
capex = float(request.capex)
net_per_period = (
revenue_per_period
- float(request.opex)
- float(request.sustaining_capex)
)
cash_flow_models, cash_flow_entries = _generate_cash_flows(
periods=periods,
net_per_period=net_per_period,
capex=capex,
)
# Update per-period entries to include explicit costs for presentation
for entry in cash_flow_entries[1:]:
entry.revenue = revenue_per_period
entry.opex = float(request.opex)
entry.sustaining_capex = float(request.sustaining_capex)
entry.net = net_per_period
discount_rate = (request.discount_rate or 0.0) / 100.0
npv_value = net_present_value(discount_rate, cash_flow_models)
try:
irr_value = internal_rate_of_return(cash_flow_models) * 100.0
except (ValueError, ZeroDivisionError, ConvergenceError):
irr_value = None
try:
payback_value = payback_period(cash_flow_models)
except (ValueError, PaybackNotReachedError):
payback_value = None
total_costs = processing_total + sustaining_total + capex
total_net = revenue_total - total_costs
if revenue_total == 0:
margin_value = None
else:
margin_value = (total_net / revenue_total) * 100.0
currency = request.currency_code or pricing_result.currency
try:
currency = normalise_currency(currency)
except CurrencyValidationError as exc:
raise ProfitabilityValidationError(
str(exc), ["currency_code"]) from exc
costs = ProfitabilityCosts(
opex_total=processing_total,
sustaining_capex_total=sustaining_total,
capex=capex,
)
metrics = ProfitabilityMetrics(
npv=npv_value,
irr=irr_value,
payback_period=payback_value,
margin=margin_value,
)
return ProfitabilityCalculationResult(
pricing=pricing_result,
costs=costs,
metrics=metrics,
cash_flows=cash_flow_entries,
currency=currency,
)
def calculate_initial_capex(
request: CapexCalculationRequest,
) -> CapexCalculationResult:
"""Aggregate capex components into totals and timelines."""
if not request.components:
raise CapexValidationError(
"At least one capex component is required for calculation.",
["components"],
)
parameters = request.parameters
base_currency = parameters.currency_code
if base_currency:
try:
base_currency = normalise_currency(base_currency)
except CurrencyValidationError as exc:
raise CapexValidationError(
str(exc), ["parameters.currency_code"]
) from exc
overall = 0.0
category_totals: dict[str, float] = defaultdict(float)
timeline_totals: dict[int, float] = defaultdict(float)
normalised_components: list[CapexComponentInput] = []
for index, component in enumerate(request.components):
amount = float(component.amount)
overall += amount
category_totals[component.category] += amount
spend_year = component.spend_year or 0
timeline_totals[spend_year] += amount
component_currency = component.currency
if component_currency:
try:
component_currency = normalise_currency(component_currency)
except CurrencyValidationError as exc:
raise CapexValidationError(
str(exc), [f"components[{index}].currency"]
) from exc
if base_currency is None and component_currency:
base_currency = component_currency
elif (
base_currency is not None
and component_currency is not None
and component_currency != base_currency
):
raise CapexValidationError(
(
"Component currency does not match the global currency. "
f"Expected {base_currency}, got {component_currency}."
),
[f"components[{index}].currency"],
)
normalised_components.append(
CapexComponentInput(
id=component.id,
name=component.name,
category=component.category,
amount=amount,
currency=component_currency,
spend_year=component.spend_year,
notes=component.notes,
)
)
contingency_pct = float(parameters.contingency_pct or 0.0)
contingency_amount = overall * (contingency_pct / 100.0)
grand_total = overall + contingency_amount
category_breakdowns: list[CapexCategoryBreakdown] = []
if category_totals:
for category, total in sorted(category_totals.items()):
share = (total / overall * 100.0) if overall else None
category_breakdowns.append(
CapexCategoryBreakdown(
category=category,
amount=total,
share=share,
)
)
cumulative = 0.0
timeline_entries: list[CapexTimelineEntry] = []
for year, spend in sorted(timeline_totals.items()):
cumulative += spend
timeline_entries.append(
CapexTimelineEntry(year=year, spend=spend, cumulative=cumulative)
)
try:
currency = normalise_currency(base_currency) if base_currency else None
except CurrencyValidationError as exc:
raise CapexValidationError(
str(exc), ["parameters.currency_code"]
) from exc
totals = CapexTotals(
overall=overall,
contingency_pct=contingency_pct,
contingency_amount=contingency_amount,
with_contingency=grand_total,
by_category=category_breakdowns,
)
return CapexCalculationResult(
totals=totals,
timeline=timeline_entries,
components=normalised_components,
parameters=parameters,
options=request.options,
currency=currency,
)
def calculate_opex(
request: OpexCalculationRequest,
) -> OpexCalculationResult:
"""Aggregate opex components into annual totals and timeline."""
if not request.components:
raise OpexValidationError(
"At least one opex component is required for calculation.",
["components"],
)
parameters: OpexParameters = request.parameters
base_currency = parameters.currency_code
if base_currency:
try:
base_currency = normalise_currency(base_currency)
except CurrencyValidationError as exc:
raise OpexValidationError(
str(exc), ["parameters.currency_code"]
) from exc
evaluation_horizon = parameters.evaluation_horizon_years or 1
if evaluation_horizon <= 0:
raise OpexValidationError(
"Evaluation horizon must be at least 1 year.",
["parameters.evaluation_horizon_years"],
)
escalation_pct = float(parameters.escalation_pct or 0.0)
apply_escalation = bool(parameters.apply_escalation)
category_totals: dict[str, float] = defaultdict(float)
timeline_totals: dict[int, float] = defaultdict(float)
timeline_escalated: dict[int, float] = defaultdict(float)
normalised_components: list[OpexComponentInput] = []
max_period_end = evaluation_horizon
for index, component in enumerate(request.components):
frequency = component.frequency.lower()
multiplier = _FREQUENCY_MULTIPLIER.get(frequency)
if multiplier is None:
raise OpexValidationError(
f"Unsupported frequency '{component.frequency}'.",
[f"components[{index}].frequency"],
)
unit_cost = float(component.unit_cost)
quantity = float(component.quantity)
annual_cost = unit_cost * quantity * multiplier
period_start = component.period_start or 1
period_end = component.period_end or evaluation_horizon
if period_end < period_start:
raise OpexValidationError(
(
"Component period_end must be greater than or equal to "
"period_start."
),
[f"components[{index}].period_end"],
)
max_period_end = max(max_period_end, period_end)
component_currency = component.currency
if component_currency:
try:
component_currency = normalise_currency(component_currency)
except CurrencyValidationError as exc:
raise OpexValidationError(
str(exc), [f"components[{index}].currency"]
) from exc
if base_currency is None and component_currency:
base_currency = component_currency
elif (
base_currency is not None
and component_currency is not None
and component_currency != base_currency
):
raise OpexValidationError(
(
"Component currency does not match the global currency. "
f"Expected {base_currency}, got {component_currency}."
),
[f"components[{index}].currency"],
)
category_totals[component.category] += annual_cost
for period in range(period_start, period_end + 1):
timeline_totals[period] += annual_cost
normalised_components.append(
OpexComponentInput(
id=component.id,
name=component.name,
category=component.category,
unit_cost=unit_cost,
quantity=quantity,
frequency=frequency,
currency=component_currency,
period_start=period_start,
period_end=period_end,
notes=component.notes,
)
)
evaluation_horizon = max(evaluation_horizon, max_period_end)
try:
currency = normalise_currency(base_currency) if base_currency else None
except CurrencyValidationError as exc:
raise OpexValidationError(
str(exc), ["parameters.currency_code"]
) from exc
timeline_entries: list[OpexTimelineEntry] = []
escalated_values: list[float] = []
overall_annual = timeline_totals.get(1, 0.0)
escalated_total = 0.0
for period in range(1, evaluation_horizon + 1):
base_cost = timeline_totals.get(period, 0.0)
if apply_escalation:
factor = (1 + escalation_pct / 100.0) ** (period - 1)
else:
factor = 1.0
escalated_cost = base_cost * factor
timeline_escalated[period] = escalated_cost
escalated_total += escalated_cost
timeline_entries.append(
OpexTimelineEntry(
period=period,
base_cost=base_cost,
escalated_cost=escalated_cost if apply_escalation else None,
)
)
escalated_values.append(escalated_cost)
category_breakdowns: list[OpexCategoryBreakdown] = []
total_base = sum(category_totals.values())
for category, total in sorted(category_totals.items()):
share = (total / total_base * 100.0) if total_base else None
category_breakdowns.append(
OpexCategoryBreakdown(
category=category,
annual_cost=total,
share=share,
)
)
metrics = OpexMetrics(
annual_average=fmean(escalated_values) if escalated_values else None,
cost_per_ton=None,
)
totals = OpexTotals(
overall_annual=overall_annual,
escalated_total=escalated_total if apply_escalation else None,
escalation_pct=escalation_pct if apply_escalation else None,
by_category=category_breakdowns,
)
return OpexCalculationResult(
totals=totals,
timeline=timeline_entries,
metrics=metrics,
components=normalised_components,
parameters=parameters,
options=request.options,
currency=currency,
)
__all__ = [
"calculate_profitability",
"calculate_initial_capex",
"calculate_opex",
]

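To make the opex arithmetic above concrete: a component's annual cost is unit_cost * quantity * frequency multiplier, and when escalation is enabled period p scales that cost by (1 + escalation_pct / 100) ** (p - 1). A standalone sketch of the same math:

# Worked example of the escalation formula used in calculate_opex.
FREQUENCY_MULTIPLIER = {"daily": 365, "weekly": 52, "monthly": 12, "quarterly": 4, "annually": 1}

unit_cost, quantity, frequency = 12.5, 40.0, "monthly"
annual_cost = unit_cost * quantity * FREQUENCY_MULTIPLIER[frequency]  # 6000.0

escalation_pct = 3.0
for period in (1, 2, 3):
    factor = (1 + escalation_pct / 100.0) ** (period - 1)
    print(period, round(annual_cost * factor, 2))  # 6000.0, 6180.0, 6365.4
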
services/currency.py

@@ -0,0 +1,43 @@
"""Utilities for currency normalization within pricing and financial workflows."""
from __future__ import annotations
import re
from dataclasses import dataclass
VALID_CURRENCY_PATTERN = re.compile(r"^[A-Z]{3}$")
@dataclass(frozen=True)
class CurrencyValidationError(ValueError):
"""Raised when a currency code fails validation."""
code: str
def __str__(self) -> str:  # pragma: no cover - message formatting not exercised in tests
return f"Invalid currency code: {self.code!r}"
def normalise_currency(code: str | None) -> str | None:
"""Normalise currency codes to uppercase ISO-4217 values."""
if code is None:
return None
candidate = code.strip().upper()
if not VALID_CURRENCY_PATTERN.match(candidate):
raise CurrencyValidationError(candidate)
return candidate
def require_currency(code: str | None, default: str | None = None) -> str:
"""Return normalised currency code, falling back to default when missing."""
normalised = normalise_currency(code)
if normalised is not None:
return normalised
if default is None:
raise CurrencyValidationError("<missing currency>")
fallback = normalise_currency(default)
if fallback is None:
raise CurrencyValidationError("<invalid default currency>")
return fallback

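The helpers above in use; the trimming, upper-casing, and default fallback shown are the behaviours defined in this module:

from services.currency import CurrencyValidationError, normalise_currency, require_currency

normalise_currency(" usd ")             # -> "USD"
normalise_currency(None)                # -> None
require_currency(None, default="eur")   # -> "EUR"
try:
    normalise_currency("EU")            # two letters: fails the ISO-4217 pattern
except CurrencyValidationError as exc:
    print(exc)                          # Invalid currency code: 'EU'
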
services/exceptions.py

@@ -0,0 +1,61 @@
"""Domain-level exceptions for service and repository layers."""
from dataclasses import dataclass
from typing import Sequence
class EntityNotFoundError(Exception):
"""Raised when a requested entity cannot be located."""
class EntityConflictError(Exception):
"""Raised when attempting to create or update an entity that violates uniqueness."""
class AuthorizationError(Exception):
"""Raised when a user lacks permission to perform an action."""
@dataclass(eq=False)
class ScenarioValidationError(Exception):
"""Raised when scenarios fail comparison validation rules."""
code: str
message: str
scenario_ids: Sequence[int] | None = None
def __str__(self) -> str: # pragma: no cover - mirrors message for logging
return self.message
@dataclass(eq=False)
class ProfitabilityValidationError(Exception):
"""Raised when profitability calculation inputs fail domain validation."""
message: str
field_errors: Sequence[str] | None = None
def __str__(self) -> str: # pragma: no cover - mirrors message for logging
return self.message
@dataclass(eq=False)
class CapexValidationError(Exception):
"""Raised when capex calculation inputs fail domain validation."""
message: str
field_errors: Sequence[str] | None = None
def __str__(self) -> str: # pragma: no cover - mirrors message for logging
return self.message
@dataclass(eq=False)
class OpexValidationError(Exception):
"""Raised when opex calculation inputs fail domain validation."""
message: str
field_errors: Sequence[str] | None = None
def __str__(self) -> str: # pragma: no cover - mirrors message for logging
return self.message

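A sketch of surfacing these field-aware errors at a call site; request, metadata, and the report helper are placeholders:

from services.calculations import calculate_profitability
from services.exceptions import ProfitabilityValidationError

try:
    result = calculate_profitability(request, metadata=metadata)  # placeholders
except ProfitabilityValidationError as exc:
    # str(exc) mirrors exc.message, while field_errors pinpoints the offending
    # inputs, e.g. "Evaluation periods must be at least 1." with ['periods'].
    report(message=str(exc), fields=exc.field_errors)  # hypothetical reporter
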
services/export_query.py

@@ -0,0 +1,121 @@
from __future__ import annotations
from dataclasses import dataclass
from datetime import date, datetime
from typing import Iterable
from models import MiningOperationType, ResourceType, ScenarioStatus
from services.currency import CurrencyValidationError, normalise_currency
def _normalise_lower_strings(values: Iterable[str]) -> tuple[str, ...]:
unique: set[str] = set()
for value in values:
if not value:
continue
trimmed = value.strip().lower()
if not trimmed:
continue
unique.add(trimmed)
return tuple(sorted(unique))
def _normalise_upper_strings(values: Iterable[str | None]) -> tuple[str, ...]:
unique: set[str] = set()
for value in values:
if value is None:
continue
candidate = value if isinstance(value, str) else str(value)
candidate = candidate.strip()
if not candidate:
continue
try:
normalised = normalise_currency(candidate)
except CurrencyValidationError as exc:
raise ValueError(str(exc)) from exc
if normalised is None:
continue
unique.add(normalised)
return tuple(sorted(unique))
@dataclass(slots=True, frozen=True)
class ProjectExportFilters:
"""Filter parameters for project export queries."""
ids: tuple[int, ...] = ()
names: tuple[str, ...] = ()
name_contains: str | None = None
locations: tuple[str, ...] = ()
operation_types: tuple[MiningOperationType, ...] = ()
created_from: datetime | None = None
created_to: datetime | None = None
updated_from: datetime | None = None
updated_to: datetime | None = None
def normalised_ids(self) -> tuple[int, ...]:
unique = {identifier for identifier in self.ids if identifier > 0}
return tuple(sorted(unique))
def normalised_names(self) -> tuple[str, ...]:
return _normalise_lower_strings(self.names)
def normalised_locations(self) -> tuple[str, ...]:
return _normalise_lower_strings(self.locations)
def name_search_pattern(self) -> str | None:
if not self.name_contains:
return None
pattern = self.name_contains.strip()
if not pattern:
return None
return f"%{pattern}%"
@dataclass(slots=True, frozen=True)
class ScenarioExportFilters:
"""Filter parameters for scenario export queries."""
ids: tuple[int, ...] = ()
project_ids: tuple[int, ...] = ()
project_names: tuple[str, ...] = ()
name_contains: str | None = None
statuses: tuple[ScenarioStatus, ...] = ()
start_date_from: date | None = None
start_date_to: date | None = None
end_date_from: date | None = None
end_date_to: date | None = None
created_from: datetime | None = None
created_to: datetime | None = None
updated_from: datetime | None = None
updated_to: datetime | None = None
currencies: tuple[str, ...] = ()
primary_resources: tuple[ResourceType, ...] = ()
def normalised_ids(self) -> tuple[int, ...]:
unique = {identifier for identifier in self.ids if identifier > 0}
return tuple(sorted(unique))
def normalised_project_ids(self) -> tuple[int, ...]:
unique = {identifier for identifier in self.project_ids if identifier > 0}
return tuple(sorted(unique))
def normalised_project_names(self) -> tuple[str, ...]:
return _normalise_lower_strings(self.project_names)
def name_search_pattern(self) -> str | None:
if not self.name_contains:
return None
pattern = self.name_contains.strip()
if not pattern:
return None
return f"%{pattern}%"
def normalised_currencies(self) -> tuple[str, ...]:
return _normalise_upper_strings(self.currencies)
__all__ = (
"ProjectExportFilters",
"ScenarioExportFilters",
)

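Filter construction in practice; the normalisation helpers dedupe, trim, and case-fold exactly as defined above:

from services.export_query import ScenarioExportFilters

filters = ScenarioExportFilters(
    project_names=("Alpha ", "alpha", "Beta"),
    currencies=("usd", " eur ", ""),
)
filters.normalised_project_names()  # -> ('alpha', 'beta')
filters.normalised_currencies()     # -> ('EUR', 'USD')
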

@@ -0,0 +1,351 @@
from __future__ import annotations
import csv
from dataclasses import dataclass, field
from datetime import date, datetime, timezone
from decimal import Decimal, InvalidOperation, ROUND_HALF_UP
from enum import Enum
from io import BytesIO, StringIO
from typing import Any, Callable, Iterable, Iterator, Mapping, Sequence
from openpyxl import Workbook
CSVValueFormatter = Callable[[Any], str]
Accessor = Callable[[Any], Any]
__all__ = [
"CSVExportColumn",
"CSVExporter",
"default_project_columns",
"default_scenario_columns",
"stream_projects_to_csv",
"stream_scenarios_to_csv",
"ExcelExporter",
"export_projects_to_excel",
"export_scenarios_to_excel",
"default_formatter",
"format_datetime_utc",
"format_date_iso",
"format_decimal",
]
@dataclass(slots=True)
class CSVExportColumn:
"""Declarative description of a CSV export column."""
header: str
accessor: Accessor | str
formatter: CSVValueFormatter | None = None
required: bool = False
_accessor: Accessor = field(init=False, repr=False)
def __post_init__(self) -> None:
object.__setattr__(self, "_accessor", _coerce_accessor(self.accessor))
def value_for(self, entity: Any) -> Any:
accessor = object.__getattribute__(self, "_accessor")
try:
return accessor(entity)
except Exception: # pragma: no cover - defensive safeguard
return None
class CSVExporter:
"""Stream Python objects as UTF-8 encoded CSV rows."""
def __init__(
self,
columns: Sequence[CSVExportColumn],
*,
include_header: bool = True,
line_terminator: str = "\n",
) -> None:
if not columns:
raise ValueError("At least one column is required for CSV export.")
self._columns: tuple[CSVExportColumn, ...] = tuple(columns)
self._include_header = include_header
self._line_terminator = line_terminator
@property
def columns(self) -> tuple[CSVExportColumn, ...]:
return self._columns
def headers(self) -> tuple[str, ...]:
return tuple(column.header for column in self._columns)
def iter_bytes(self, records: Iterable[Any]) -> Iterator[bytes]:
buffer = StringIO()
writer = csv.writer(buffer, lineterminator=self._line_terminator)
if self._include_header:
writer.writerow(self.headers())
yield _drain_buffer(buffer)
for record in records:
writer.writerow(self._format_row(record))
yield _drain_buffer(buffer)
def _format_row(self, record: Any) -> list[str]:
formatted: list[str] = []
for column in self._columns:
raw_value = column.value_for(record)
formatter = column.formatter or default_formatter
formatted.append(formatter(raw_value))
return formatted
def default_project_columns(
*,
include_description: bool = True,
include_timestamps: bool = True,
) -> tuple[CSVExportColumn, ...]:
columns: list[CSVExportColumn] = [
CSVExportColumn("name", "name", required=True),
CSVExportColumn("location", "location"),
CSVExportColumn("operation_type", "operation_type"),
]
if include_description:
columns.append(CSVExportColumn("description", "description"))
if include_timestamps:
columns.extend(
(
CSVExportColumn("created_at", "created_at",
formatter=format_datetime_utc),
CSVExportColumn("updated_at", "updated_at",
formatter=format_datetime_utc),
)
)
return tuple(columns)
def default_scenario_columns(
*,
include_description: bool = True,
include_timestamps: bool = True,
) -> tuple[CSVExportColumn, ...]:
columns: list[CSVExportColumn] = [
CSVExportColumn(
"project_name",
lambda scenario: getattr(
getattr(scenario, "project", None), "name", None),
required=True,
),
CSVExportColumn("name", "name", required=True),
CSVExportColumn("status", "status"),
CSVExportColumn("start_date", "start_date", formatter=format_date_iso),
CSVExportColumn("end_date", "end_date", formatter=format_date_iso),
CSVExportColumn("discount_rate", "discount_rate",
formatter=format_decimal),
CSVExportColumn("currency", "currency"),
CSVExportColumn("primary_resource", "primary_resource"),
]
if include_description:
columns.append(CSVExportColumn("description", "description"))
if include_timestamps:
columns.extend(
(
CSVExportColumn("created_at", "created_at",
formatter=format_datetime_utc),
CSVExportColumn("updated_at", "updated_at",
formatter=format_datetime_utc),
)
)
return tuple(columns)
def stream_projects_to_csv(
projects: Iterable[Any],
*,
columns: Sequence[CSVExportColumn] | None = None,
) -> Iterator[bytes]:
resolved_columns = tuple(columns or default_project_columns())
exporter = CSVExporter(resolved_columns)
yield from exporter.iter_bytes(projects)
def stream_scenarios_to_csv(
scenarios: Iterable[Any],
*,
columns: Sequence[CSVExportColumn] | None = None,
) -> Iterator[bytes]:
resolved_columns = tuple(columns or default_scenario_columns())
exporter = CSVExporter(resolved_columns)
yield from exporter.iter_bytes(scenarios)
def default_formatter(value: Any) -> str:
if value is None:
return ""
if isinstance(value, Enum):
return str(value.value)
if isinstance(value, Decimal):
return format_decimal(value)
if isinstance(value, datetime):
return format_datetime_utc(value)
if isinstance(value, date):
return format_date_iso(value)
if isinstance(value, bool):
return "true" if value else "false"
return str(value)
def format_datetime_utc(value: Any) -> str:
if not isinstance(value, datetime):
return ""
if value.tzinfo is None:
value = value.replace(tzinfo=timezone.utc)
value = value.astimezone(timezone.utc)
return value.isoformat().replace("+00:00", "Z")
def format_date_iso(value: Any) -> str:
if not isinstance(value, date):
return ""
return value.isoformat()
def format_decimal(value: Any) -> str:
if value is None:
return ""
if isinstance(value, Decimal):
try:
quantised = value.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
except InvalidOperation: # pragma: no cover - unexpected precision issues
quantised = value
return format(quantised, "f")
if isinstance(value, (int, float)):
return f"{value:.2f}"
return default_formatter(value)
class ExcelExporter:
"""Produce Excel workbooks via write-only streaming."""
def __init__(
self,
columns: Sequence[CSVExportColumn],
*,
sheet_name: str = "Export",
workbook_title: str | None = None,
include_header: bool = True,
metadata: Mapping[str, Any] | None = None,
metadata_sheet_name: str = "Metadata",
) -> None:
if not columns:
raise ValueError(
"At least one column is required for Excel export.")
self._columns: tuple[CSVExportColumn, ...] = tuple(columns)
self._sheet_name = sheet_name or "Export"
self._include_header = include_header
self._metadata = dict(metadata) if metadata else None
self._metadata_sheet_name = metadata_sheet_name or "Metadata"
self._workbook = Workbook(write_only=True)
if workbook_title:
self._workbook.properties.title = workbook_title
def export(self, records: Iterable[Any]) -> bytes:
sheet = self._workbook.create_sheet(title=self._sheet_name)
if self._include_header:
sheet.append([column.header for column in self._columns])
for record in records:
sheet.append(self._format_row(record))
self._append_metadata_sheet()
return self._finalize()
def _format_row(self, record: Any) -> list[Any]:
row: list[Any] = []
for column in self._columns:
raw_value = column.value_for(record)
formatter = column.formatter or default_formatter
row.append(formatter(raw_value))
return row
def _append_metadata_sheet(self) -> None:
if not self._metadata:
return
sheet_name = self._metadata_sheet_name
existing = set(self._workbook.sheetnames)
if sheet_name in existing:
index = 1
while True:
candidate = f"{sheet_name}_{index}"
if candidate not in existing:
sheet_name = candidate
break
index += 1
meta_ws = self._workbook.create_sheet(title=sheet_name)
meta_ws.append(["Key", "Value"])
for key, value in self._metadata.items():
meta_ws.append([
str(key),
"" if value is None else str(value),
])
def _finalize(self) -> bytes:
buffer = BytesIO()
self._workbook.save(buffer)
buffer.seek(0)
return buffer.getvalue()
def export_projects_to_excel(
projects: Iterable[Any],
*,
columns: Sequence[CSVExportColumn] | None = None,
sheet_name: str = "Projects",
workbook_title: str | None = None,
metadata: Mapping[str, Any] | None = None,
) -> bytes:
exporter = ExcelExporter(
columns or default_project_columns(),
sheet_name=sheet_name,
workbook_title=workbook_title,
metadata=metadata,
)
return exporter.export(projects)
def export_scenarios_to_excel(
scenarios: Iterable[Any],
*,
columns: Sequence[CSVExportColumn] | None = None,
sheet_name: str = "Scenarios",
workbook_title: str | None = None,
metadata: Mapping[str, Any] | None = None,
) -> bytes:
exporter = ExcelExporter(
columns or default_scenario_columns(),
sheet_name=sheet_name,
workbook_title=workbook_title,
metadata=metadata,
)
return exporter.export(scenarios)
def _coerce_accessor(accessor: Accessor | str) -> Accessor:
if callable(accessor):
return accessor
path = [segment for segment in accessor.split(".") if segment]
def _resolve(entity: Any) -> Any:
current: Any = entity
for segment in path:
if current is None:
return None
current = getattr(current, segment, None)
return current
return _resolve
def _drain_buffer(buffer: StringIO) -> bytes:
data = buffer.getvalue()
buffer.seek(0)
buffer.truncate(0)
return data.encode("utf-8")

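A minimal sketch of streaming a project export to disk; since this hunk does not show the module's filename, the import path is left as a comment:

from pathlib import Path
# from <this module> import stream_projects_to_csv  (module path not shown above)

def write_projects_csv(projects, destination: str) -> None:
    # stream_projects_to_csv yields UTF-8 encoded chunks, so large exports are
    # never buffered whole in memory; projects is any iterable of ORM objects
    # exposing the attributes named by default_project_columns().
    with Path(destination).open("wb") as handle:
        for chunk in stream_projects_to_csv(projects):
            handle.write(chunk)
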
services/financial.py

@@ -0,0 +1,252 @@
"""Financial calculation helpers for project evaluation metrics."""
from __future__ import annotations
from dataclasses import dataclass
from datetime import date, datetime
from math import isclose, isfinite
from typing import Iterable, List, Sequence, Tuple
Number = float
@dataclass(frozen=True, slots=True)
class CashFlow:
"""Represents a dated cash flow in scenario currency."""
amount: Number
period_index: int | None = None
date: date | datetime | None = None
class ConvergenceError(RuntimeError):
"""Raised when an iterative solver fails to converge."""
class PaybackNotReachedError(RuntimeError):
"""Raised when cumulative cash flows never reach a non-negative total."""
def _coerce_date(value: date | datetime) -> date:
if isinstance(value, datetime):
return value.date()
return value
def normalize_cash_flows(
cash_flows: Iterable[CashFlow],
*,
compounds_per_year: int = 1,
) -> List[Tuple[Number, float]]:
"""Normalise cash flows to ``(amount, periods)`` tuples.
When explicit ``period_index`` values are provided they take precedence. If
only dates are supplied, the first dated cash flow anchors the timeline and
subsequent cash flows convert their day offsets into fractional periods
based on ``compounds_per_year``. When neither a period index nor a date is
present, cash flows are treated as sequential periods in input order.
"""
flows: Sequence[CashFlow] = list(cash_flows)
if not flows:
return []
if compounds_per_year <= 0:
raise ValueError("compounds_per_year must be a positive integer")
base_date: date | None = None
for flow in flows:
if flow.date is not None:
base_date = _coerce_date(flow.date)
break
normalised: List[Tuple[Number, float]] = []
for idx, flow in enumerate(flows):
amount = float(flow.amount)
if flow.period_index is not None:
periods = float(flow.period_index)
elif flow.date is not None and base_date is not None:
current_date = _coerce_date(flow.date)
delta_days = (current_date - base_date).days
period_length_days = 365.0 / float(compounds_per_year)
periods = delta_days / period_length_days
else:
periods = float(idx)
normalised.append((amount, periods))
return normalised
def discount_factor(rate: Number, periods: float, *, compounds_per_year: int = 1) -> float:
"""Return the factor used to discount a value ``periods`` steps in the future."""
if compounds_per_year <= 0:
raise ValueError("compounds_per_year must be a positive integer")
periodic_rate = rate / float(compounds_per_year)
return (1.0 + periodic_rate) ** (-periods)
def net_present_value(
rate: Number,
cash_flows: Iterable[CashFlow],
*,
residual_value: Number | None = None,
residual_periods: float | None = None,
compounds_per_year: int = 1,
) -> float:
"""Calculate Net Present Value for ``cash_flows``.
``rate`` is a decimal (``0.1`` for 10%). Cash flows are discounted using the
given compounding frequency. When ``residual_value`` is provided it is
discounted at ``residual_periods`` periods; by default the value occurs one
period after the final cash flow.
"""
normalised = normalize_cash_flows(
cash_flows,
compounds_per_year=compounds_per_year,
)
if not normalised and residual_value is None:
return 0.0
total = 0.0
for amount, periods in normalised:
factor = discount_factor(
rate, periods, compounds_per_year=compounds_per_year)
total += amount * factor
if residual_value is not None:
if residual_periods is None:
last_period = normalised[-1][1] if normalised else 0.0
residual_periods = last_period + 1.0
factor = discount_factor(
rate, residual_periods, compounds_per_year=compounds_per_year)
total += float(residual_value) * factor
return total
def internal_rate_of_return(
cash_flows: Iterable[CashFlow],
*,
guess: Number = 0.1,
max_iterations: int = 100,
tolerance: float = 1e-6,
compounds_per_year: int = 1,
) -> float:
"""Return the internal rate of return for ``cash_flows``.
Uses Newton-Raphson iteration with a bracketed fallback when the derivative
becomes unstable. Raises :class:`ConvergenceError` if no root is found.
"""
flows = normalize_cash_flows(
cash_flows,
compounds_per_year=compounds_per_year,
)
if not flows:
raise ValueError("cash_flows must contain at least one item")
amounts = [amount for amount, _ in flows]
if not any(amount < 0 for amount in amounts) or not any(amount > 0 for amount in amounts):
raise ValueError(
"cash_flows must include both negative and positive values")
def _npv_with_flows(rate: float) -> float:
periodic_rate = rate / float(compounds_per_year)
if periodic_rate <= -1.0:
return float("inf")
total = 0.0
for amount, periods in flows:
factor = (1.0 + periodic_rate) ** (-periods)
total += amount * factor
return total
def _derivative(rate: float) -> float:
periodic_rate = rate / float(compounds_per_year)
if periodic_rate <= -1.0:
return float("inf")
derivative = 0.0
for amount, periods in flows:
factor = (1.0 + periodic_rate) ** (-periods - 1.0)
derivative += -amount * periods * \
factor / float(compounds_per_year)
return derivative
rate = float(guess)
for _ in range(max_iterations):
value = _npv_with_flows(rate)
if isclose(value, 0.0, abs_tol=tolerance):
return rate
derivative = _derivative(rate)
if derivative == 0.0 or not isfinite(derivative):
break
next_rate = rate - value / derivative
if abs(next_rate - rate) < tolerance:
return next_rate
rate = next_rate
# Fallback to bracketed bisection between sensible bounds.
lower_bound = -0.99 * float(compounds_per_year)
upper_bound = 10.0
lower_value = _npv_with_flows(lower_bound)
upper_value = _npv_with_flows(upper_bound)
attempts = 0
while lower_value * upper_value > 0 and attempts < 12:
upper_bound *= 2.0
upper_value = _npv_with_flows(upper_bound)
attempts += 1
if lower_value * upper_value > 0:
raise ConvergenceError(
"IRR could not be bracketed within default bounds")
for _ in range(max_iterations * 2):
midpoint = (lower_bound + upper_bound) / 2.0
mid_value = _npv_with_flows(midpoint)
if isclose(mid_value, 0.0, abs_tol=tolerance):
return midpoint
if lower_value * mid_value < 0:
upper_bound = midpoint
upper_value = mid_value
else:
lower_bound = midpoint
lower_value = mid_value
raise ConvergenceError("IRR solver failed to converge")
def payback_period(
cash_flows: Iterable[CashFlow],
*,
allow_fractional: bool = True,
compounds_per_year: int = 1,
) -> float:
"""Return the period index where cumulative cash flow becomes non-negative."""
flows = normalize_cash_flows(
cash_flows,
compounds_per_year=compounds_per_year,
)
if not flows:
raise ValueError("cash_flows must contain at least one item")
flows = sorted(flows, key=lambda item: item[1])
cumulative = 0.0
previous_period = flows[0][1]
for index, (amount, periods) in enumerate(flows):
next_cumulative = cumulative + amount
if next_cumulative >= 0.0:
if not allow_fractional or isclose(amount, 0.0):
return periods
prev_period = previous_period if index > 0 else periods
fraction = -cumulative / amount
return prev_period + fraction * (periods - prev_period)
cumulative = next_cumulative
previous_period = periods
raise PaybackNotReachedError(
"Cumulative cash flow never becomes non-negative")

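A worked example of the three metrics using the helpers defined above (values rounded):

from services.financial import (
    CashFlow,
    internal_rate_of_return,
    net_present_value,
    payback_period,
)

flows = [CashFlow(amount=-1000.0, period_index=0)] + [
    CashFlow(amount=400.0, period_index=i) for i in range(1, 5)
]
net_present_value(0.10, flows)   # ~267.95 at a 10% discount rate
internal_rate_of_return(flows)   # ~0.2186, i.e. ~21.9% per period
payback_period(flows)            # 2.5: cumulative cash turns positive mid-period 3
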
services/importers.py

@@ -0,0 +1,905 @@
from __future__ import annotations
import logging
import time
from dataclasses import dataclass
from enum import Enum
from pathlib import Path
from typing import Any, BinaryIO, Callable, Generic, Iterable, Mapping, Optional, TypeVar, cast
from uuid import uuid4
from types import MappingProxyType
import pandas as pd
from pandas import DataFrame
from pydantic import BaseModel, ValidationError
from models import Project, Scenario
from schemas.imports import ProjectImportRow, ScenarioImportRow
from services.unit_of_work import UnitOfWork
from models.import_export_log import ImportExportLog
from monitoring.metrics import observe_import
logger = logging.getLogger(__name__)
TImportRow = TypeVar("TImportRow", bound=BaseModel)
PROJECT_COLUMNS: tuple[str, ...] = (
"name",
"location",
"operation_type",
"description",
"created_at",
"updated_at",
)
SCENARIO_COLUMNS: tuple[str, ...] = (
"project_name",
"name",
"status",
"start_date",
"end_date",
"discount_rate",
"currency",
"primary_resource",
"description",
"created_at",
"updated_at",
)
@dataclass(slots=True)
class ImportRowError:
row_number: int
field: str | None
message: str
@dataclass(slots=True)
class ParsedImportRow(Generic[TImportRow]):
row_number: int
data: TImportRow
@dataclass(slots=True)
class ImportResult(Generic[TImportRow]):
rows: list[ParsedImportRow[TImportRow]]
errors: list[ImportRowError]
class UnsupportedImportFormat(ValueError):
pass
class ImportPreviewState(str, Enum):
NEW = "new"
UPDATE = "update"
SKIP = "skip"
ERROR = "error"
@dataclass(slots=True)
class ImportPreviewRow(Generic[TImportRow]):
row_number: int
data: TImportRow
state: ImportPreviewState
issues: list[str]
context: dict[str, Any] | None = None
@dataclass(slots=True)
class ImportPreviewSummary:
total_rows: int
accepted: int
skipped: int
errored: int
@dataclass(slots=True)
class ImportPreview(Generic[TImportRow]):
rows: list[ImportPreviewRow[TImportRow]]
summary: ImportPreviewSummary
row_issues: list["ImportPreviewRowIssues"]
parser_errors: list[ImportRowError]
stage_token: str | None
@dataclass(slots=True)
class StagedRow(Generic[TImportRow]):
parsed: ParsedImportRow[TImportRow]
context: dict[str, Any]
@dataclass(slots=True)
class ImportPreviewRowIssue:
message: str
field: str | None = None
@dataclass(slots=True)
class ImportPreviewRowIssues:
row_number: int
state: ImportPreviewState | None
issues: list[ImportPreviewRowIssue]
@dataclass(slots=True)
class StagedImport(Generic[TImportRow]):
token: str
rows: list[StagedRow[TImportRow]]
@dataclass(slots=True, frozen=True)
class StagedRowView(Generic[TImportRow]):
row_number: int
data: TImportRow
context: Mapping[str, Any]
@dataclass(slots=True, frozen=True)
class StagedImportView(Generic[TImportRow]):
token: str
rows: tuple[StagedRowView[TImportRow], ...]
@dataclass(slots=True, frozen=True)
class ImportCommitSummary:
created: int
updated: int
@dataclass(slots=True, frozen=True)
class ImportCommitResult(Generic[TImportRow]):
token: str
rows: tuple[StagedRowView[TImportRow], ...]
summary: ImportCommitSummary
UnitOfWorkFactory = Callable[[], UnitOfWork]
class ImportIngestionService:
"""Coordinates parsing, validation, and preview staging for imports."""
def __init__(self, uow_factory: UnitOfWorkFactory) -> None:
self._uow_factory = uow_factory
self._project_stage: dict[str, StagedImport[ProjectImportRow]] = {}
self._scenario_stage: dict[str, StagedImport[ScenarioImportRow]] = {}
def preview_projects(
self,
stream: BinaryIO,
filename: str,
) -> ImportPreview[ProjectImportRow]:
start = time.perf_counter()
result = load_project_imports(stream, filename)
status = "success" if not result.errors else "partial"
self._record_audit_log(
action="preview",
dataset="projects",
status=status,
filename=filename,
row_count=len(result.rows),
detail=f"accepted={len(result.rows)} parser_errors={len(result.errors)}",
)
observe_import(
action="preview",
dataset="projects",
status=status,
seconds=time.perf_counter() - start,
)
logger.info(
"import.preview",
extra={
"event": "import.preview",
"dataset": "projects",
"status": status,
"filename": filename,
"row_count": len(result.rows),
"error_count": len(result.errors),
},
)
parser_errors = result.errors
preview_rows: list[ImportPreviewRow[ProjectImportRow]] = []
staged_rows: list[StagedRow[ProjectImportRow]] = []
accepted = skipped = errored = 0
seen_names: set[str] = set()
existing_by_name: dict[str, Project] = {}
if result.rows:
with self._uow_factory() as uow:
if not uow.projects:
raise RuntimeError("Project repository is unavailable")
existing_by_name = dict(
uow.projects.find_by_names(
parsed.data.name for parsed in result.rows
)
)
for parsed in result.rows:
name_key = _normalise_key(parsed.data.name)
issues: list[str] = []
context: dict[str, Any] | None = None
state = ImportPreviewState.NEW
if name_key in seen_names:
state = ImportPreviewState.SKIP
issues.append(
"Duplicate project name within upload; row skipped.")
else:
seen_names.add(name_key)
existing = existing_by_name.get(name_key)
if existing:
state = ImportPreviewState.UPDATE
context = {
"mode": "update",
"project_id": existing.id,
}
issues.append("Existing project will be updated.")
else:
context = {"mode": "create"}
preview_rows.append(
ImportPreviewRow(
row_number=parsed.row_number,
data=parsed.data,
state=state,
issues=issues,
context=context,
)
)
if state in {ImportPreviewState.NEW, ImportPreviewState.UPDATE}:
accepted += 1
staged_rows.append(
StagedRow(parsed=parsed, context=context or {
"mode": "create"})
)
elif state == ImportPreviewState.SKIP:
skipped += 1
else:
errored += 1
parser_error_rows = {error.row_number for error in parser_errors}
errored += len(parser_error_rows)
total_rows = len(preview_rows) + len(parser_error_rows)
summary = ImportPreviewSummary(
total_rows=total_rows,
accepted=accepted,
skipped=skipped,
errored=errored,
)
row_issues = _compile_row_issues(preview_rows, parser_errors)
stage_token: str | None = None
if staged_rows:
stage_token = self._store_project_stage(staged_rows)
return ImportPreview(
rows=preview_rows,
summary=summary,
row_issues=row_issues,
parser_errors=parser_errors,
stage_token=stage_token,
)
def preview_scenarios(
self,
stream: BinaryIO,
filename: str,
) -> ImportPreview[ScenarioImportRow]:
start = time.perf_counter()
result = load_scenario_imports(stream, filename)
status = "success" if not result.errors else "partial"
self._record_audit_log(
action="preview",
dataset="scenarios",
status=status,
filename=filename,
row_count=len(result.rows),
detail=f"accepted={len(result.rows)} parser_errors={len(result.errors)}",
)
observe_import(
action="preview",
dataset="scenarios",
status=status,
seconds=time.perf_counter() - start,
)
logger.info(
"import.preview",
extra={
"event": "import.preview",
"dataset": "scenarios",
"status": status,
"filename": filename,
"row_count": len(result.rows),
"error_count": len(result.errors),
},
)
parser_errors = result.errors
preview_rows: list[ImportPreviewRow[ScenarioImportRow]] = []
staged_rows: list[StagedRow[ScenarioImportRow]] = []
accepted = skipped = errored = 0
seen_pairs: set[tuple[str, str]] = set()
existing_projects: dict[str, Project] = {}
existing_scenarios: dict[tuple[int, str], Scenario] = {}
if result.rows:
with self._uow_factory() as uow:
if not uow.projects or not uow.scenarios:
raise RuntimeError("Repositories are unavailable")
existing_projects = dict(
uow.projects.find_by_names(
parsed.data.project_name for parsed in result.rows
)
)
names_by_project: dict[int, set[str]] = {}
for parsed in result.rows:
project = existing_projects.get(
_normalise_key(parsed.data.project_name)
)
if not project:
continue
names_by_project.setdefault(project.id, set()).add(
_normalise_key(parsed.data.name)
)
for project_id, names in names_by_project.items():
matches = uow.scenarios.find_by_project_and_names(
project_id, names)
for name_key, scenario in matches.items():
existing_scenarios[(project_id, name_key)] = scenario
for parsed in result.rows:
project_key = _normalise_key(parsed.data.project_name)
scenario_key = _normalise_key(parsed.data.name)
issues: list[str] = []
context: dict[str, Any] | None = None
state = ImportPreviewState.NEW
if (project_key, scenario_key) in seen_pairs:
state = ImportPreviewState.SKIP
issues.append(
"Duplicate scenario for project within upload; row skipped."
)
else:
seen_pairs.add((project_key, scenario_key))
project = existing_projects.get(project_key)
if not project:
state = ImportPreviewState.ERROR
issues.append(
f"Project '{parsed.data.project_name}' does not exist."
)
else:
context = {"mode": "create", "project_id": project.id}
existing = existing_scenarios.get(
(project.id, scenario_key))
if existing:
state = ImportPreviewState.UPDATE
context = {
"mode": "update",
"project_id": project.id,
"scenario_id": existing.id,
}
issues.append("Existing scenario will be updated.")
preview_rows.append(
ImportPreviewRow(
row_number=parsed.row_number,
data=parsed.data,
state=state,
issues=issues,
context=context,
)
)
if state in {ImportPreviewState.NEW, ImportPreviewState.UPDATE}:
accepted += 1
staged_rows.append(
StagedRow(parsed=parsed, context=context or {
"mode": "create"})
)
elif state == ImportPreviewState.SKIP:
skipped += 1
else:
errored += 1
parser_error_rows = {error.row_number for error in parser_errors}
errored += len(parser_error_rows)
total_rows = len(preview_rows) + len(parser_error_rows)
summary = ImportPreviewSummary(
total_rows=total_rows,
accepted=accepted,
skipped=skipped,
errored=errored,
)
row_issues = _compile_row_issues(preview_rows, parser_errors)
stage_token: str | None = None
if staged_rows:
stage_token = self._store_scenario_stage(staged_rows)
return ImportPreview(
rows=preview_rows,
summary=summary,
row_issues=row_issues,
parser_errors=parser_errors,
stage_token=stage_token,
)
def get_staged_projects(
self, token: str
) -> StagedImportView[ProjectImportRow] | None:
staged = self._project_stage.get(token)
if not staged:
return None
return _build_staged_view(staged)
def get_staged_scenarios(
self, token: str
) -> StagedImportView[ScenarioImportRow] | None:
staged = self._scenario_stage.get(token)
if not staged:
return None
return _build_staged_view(staged)
def consume_staged_projects(
self, token: str
) -> StagedImportView[ProjectImportRow] | None:
staged = self._project_stage.pop(token, None)
if not staged:
return None
return _build_staged_view(staged)
def consume_staged_scenarios(
self, token: str
) -> StagedImportView[ScenarioImportRow] | None:
staged = self._scenario_stage.pop(token, None)
if not staged:
return None
return _build_staged_view(staged)
def clear_staged_projects(self, token: str) -> bool:
return self._project_stage.pop(token, None) is not None
def clear_staged_scenarios(self, token: str) -> bool:
return self._scenario_stage.pop(token, None) is not None
def commit_project_import(self, token: str) -> ImportCommitResult[ProjectImportRow]:
staged = self._project_stage.get(token)
if not staged:
raise ValueError(f"Unknown project import token: {token}")
staged_view = _build_staged_view(staged)
created = updated = 0
start = time.perf_counter()
try:
with self._uow_factory() as uow:
if not uow.projects:
raise RuntimeError("Project repository is unavailable")
for row in staged.rows:
mode = row.context.get("mode")
data = row.parsed.data
if mode == "create":
project = Project(
name=data.name,
location=data.location,
operation_type=data.operation_type,
description=data.description,
)
if data.created_at:
project.created_at = data.created_at
if data.updated_at:
project.updated_at = data.updated_at
uow.projects.create(project)
created += 1
elif mode == "update":
project_id = row.context.get("project_id")
if not project_id:
raise ValueError(
"Staged project update is missing project_id context"
)
project = uow.projects.get(project_id)
project.name = data.name
project.location = data.location
project.operation_type = data.operation_type
project.description = data.description
if data.created_at:
project.created_at = data.created_at
if data.updated_at:
project.updated_at = data.updated_at
updated += 1
else:
raise ValueError(
f"Unsupported staged project mode: {mode!r}")
except Exception as exc:
self._record_audit_log(
action="commit",
dataset="projects",
status="failure",
filename=None,
row_count=len(staged.rows),
detail=f"error={type(exc).__name__}: {exc}",
)
observe_import(
action="commit",
dataset="projects",
status="failure",
seconds=time.perf_counter() - start,
)
logger.exception(
"import.commit.failed",
extra={
"event": "import.commit",
"dataset": "projects",
"status": "failure",
"row_count": len(staged.rows),
"token": token,
},
)
raise
else:
self._record_audit_log(
action="commit",
dataset="projects",
status="success",
filename=None,
row_count=len(staged.rows),
detail=f"created={created} updated={updated}",
)
observe_import(
action="commit",
dataset="projects",
status="success",
seconds=time.perf_counter() - start,
)
logger.info(
"import.commit",
extra={
"event": "import.commit",
"dataset": "projects",
"status": "success",
"row_count": len(staged.rows),
"created": created,
"updated": updated,
"token": token,
},
)
self._project_stage.pop(token, None)
return ImportCommitResult(
token=token,
rows=staged_view.rows,
summary=ImportCommitSummary(created=created, updated=updated),
)
def commit_scenario_import(self, token: str) -> ImportCommitResult[ScenarioImportRow]:
staged = self._scenario_stage.get(token)
if not staged:
raise ValueError(f"Unknown scenario import token: {token}")
staged_view = _build_staged_view(staged)
created = updated = 0
start = time.perf_counter()
try:
with self._uow_factory() as uow:
if not uow.scenarios or not uow.projects:
raise RuntimeError("Scenario repositories are unavailable")
for row in staged.rows:
mode = row.context.get("mode")
data = row.parsed.data
project_id = row.context.get("project_id")
if not project_id:
raise ValueError(
"Staged scenario row is missing project_id context"
)
project = uow.projects.get(project_id)
if mode == "create":
scenario = Scenario(
project_id=project.id,
name=data.name,
status=data.status,
start_date=data.start_date,
end_date=data.end_date,
discount_rate=data.discount_rate,
currency=data.currency,
primary_resource=data.primary_resource,
description=data.description,
)
if data.created_at:
scenario.created_at = data.created_at
if data.updated_at:
scenario.updated_at = data.updated_at
uow.scenarios.create(scenario)
created += 1
elif mode == "update":
scenario_id = row.context.get("scenario_id")
if not scenario_id:
raise ValueError(
"Staged scenario update is missing scenario_id context"
)
scenario = uow.scenarios.get(scenario_id)
scenario.project_id = project.id
scenario.name = data.name
scenario.status = data.status
scenario.start_date = data.start_date
scenario.end_date = data.end_date
scenario.discount_rate = data.discount_rate
scenario.currency = data.currency
scenario.primary_resource = data.primary_resource
scenario.description = data.description
if data.created_at:
scenario.created_at = data.created_at
if data.updated_at:
scenario.updated_at = data.updated_at
updated += 1
else:
raise ValueError(
f"Unsupported staged scenario mode: {mode!r}")
except Exception as exc:
self._record_audit_log(
action="commit",
dataset="scenarios",
status="failure",
filename=None,
row_count=len(staged.rows),
detail=f"error={type(exc).__name__}: {exc}",
)
observe_import(
action="commit",
dataset="scenarios",
status="failure",
seconds=time.perf_counter() - start,
)
logger.exception(
"import.commit.failed",
extra={
"event": "import.commit",
"dataset": "scenarios",
"status": "failure",
"row_count": len(staged.rows),
"token": token,
},
)
raise
else:
self._record_audit_log(
action="commit",
dataset="scenarios",
status="success",
filename=None,
row_count=len(staged.rows),
detail=f"created={created} updated={updated}",
)
observe_import(
action="commit",
dataset="scenarios",
status="success",
seconds=time.perf_counter() - start,
)
logger.info(
"import.commit",
extra={
"event": "import.commit",
"dataset": "scenarios",
"status": "success",
"row_count": len(staged.rows),
"created": created,
"updated": updated,
"token": token,
},
)
self._scenario_stage.pop(token, None)
return ImportCommitResult(
token=token,
rows=staged_view.rows,
summary=ImportCommitSummary(created=created, updated=updated),
)
def _record_audit_log(
self,
*,
action: str,
dataset: str,
status: str,
row_count: int,
detail: Optional[str],
filename: Optional[str],
) -> None:
try:
with self._uow_factory() as uow:
if uow.session is None:
return
log = ImportExportLog(
action=action,
dataset=dataset,
status=status,
filename=filename,
row_count=row_count,
detail=detail,
)
uow.session.add(log)
uow.commit()
except Exception:
# Audit logging must not break core workflows
pass
def _store_project_stage(
self, rows: list[StagedRow[ProjectImportRow]]
) -> str:
token = str(uuid4())
self._project_stage[token] = StagedImport(token=token, rows=rows)
return token
def _store_scenario_stage(
self, rows: list[StagedRow[ScenarioImportRow]]
) -> str:
token = str(uuid4())
self._scenario_stage[token] = StagedImport(token=token, rows=rows)
return token
def load_project_imports(stream: BinaryIO, filename: str) -> ImportResult[ProjectImportRow]:
df = _load_dataframe(stream, filename)
return _parse_dataframe(df, ProjectImportRow, PROJECT_COLUMNS)
def load_scenario_imports(stream: BinaryIO, filename: str) -> ImportResult[ScenarioImportRow]:
df = _load_dataframe(stream, filename)
return _parse_dataframe(df, ScenarioImportRow, SCENARIO_COLUMNS)
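
A minimal usage sketch of the two loaders above: the header row mirrors the project columns consumed by the commit path, while the operation_type value is illustrative rather than taken from this diff.

# Sketch: parse an in-memory CSV into an ImportResult of project rows.
import io

buffer = io.BytesIO(
    b"name,location,operation_type,description\n"
    b"Mine A,Chile,open_pit,Example project\n"
)
result = load_project_imports(buffer, "projects.csv")
for parsed in result.rows:
    print(parsed.row_number, parsed.data)
for error in result.errors:
    print(f"row {error.row_number}: {error.field}: {error.message}")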
def _load_dataframe(stream: BinaryIO, filename: str) -> DataFrame:
stream.seek(0)
suffix = Path(filename).suffix.lower()
if suffix == ".csv":
        df = pd.read_csv(stream, dtype=str, keep_default_na=False, encoding="utf-8")
elif suffix in {".xls", ".xlsx"}:
df = pd.read_excel(stream, dtype=str, engine="openpyxl")
else:
raise UnsupportedImportFormat(
f"Unsupported file type: {suffix or 'unknown'}")
df.columns = [str(col).strip().lower() for col in df.columns]
return df
def _parse_dataframe(
df: DataFrame,
model: type[TImportRow],
expected_columns: Iterable[str],
) -> ImportResult[TImportRow]:
rows: list[ParsedImportRow[TImportRow]] = []
errors: list[ImportRowError] = []
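    # Row numbers start at 2 so reported issues line up with the source
    # spreadsheet, whose first row is the header.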
for index, raw in enumerate(df.to_dict(orient="records"), start=2):
payload = _prepare_payload(
cast(dict[str, object], raw), expected_columns)
try:
rows.append(
ParsedImportRow(row_number=index, data=model(**payload))
)
except ValidationError as exc: # pragma: no cover - exercised via tests
for detail in exc.errors():
loc = ".".join(str(part)
for part in detail.get("loc", [])) or None
errors.append(
ImportRowError(
row_number=index,
field=loc,
message=detail.get("msg", "Invalid value"),
)
)
return ImportResult(rows=rows, errors=errors)
def _prepare_payload(
raw: dict[str, object], expected_columns: Iterable[str]
) -> dict[str, object | None]:
payload: dict[str, object | None] = {}
for column in expected_columns:
if column not in raw:
continue
value = raw.get(column)
if isinstance(value, str):
value = value.strip()
if value == "":
value = None
if value is not None and pd.isna(cast(Any, value)):
value = None
payload[column] = value
return payload
def _normalise_key(value: str) -> str:
return value.strip().lower()
def _build_staged_view(
staged: StagedImport[TImportRow],
) -> StagedImportView[TImportRow]:
rows = tuple(
StagedRowView(
row_number=row.parsed.row_number,
data=cast(TImportRow, _deep_copy_model(row.parsed.data)),
context=MappingProxyType(dict(row.context)),
)
for row in staged.rows
)
return StagedImportView(token=staged.token, rows=rows)
def _deep_copy_model(model: BaseModel) -> BaseModel:
copy_method = getattr(model, "model_copy", None)
if callable(copy_method): # pydantic v2
return cast(BaseModel, copy_method(deep=True))
return model.copy(deep=True) # type: ignore[attr-defined]
def _compile_row_issues(
preview_rows: Iterable[ImportPreviewRow[Any]],
parser_errors: Iterable[ImportRowError],
) -> list[ImportPreviewRowIssues]:
issue_map: dict[int, ImportPreviewRowIssues] = {}
def ensure_bundle(
row_number: int,
state: ImportPreviewState | None,
) -> ImportPreviewRowIssues:
bundle = issue_map.get(row_number)
if bundle is None:
bundle = ImportPreviewRowIssues(
row_number=row_number,
state=state,
issues=[],
)
issue_map[row_number] = bundle
else:
if _state_priority(state) > _state_priority(bundle.state):
bundle.state = state
return bundle
for row in preview_rows:
if not row.issues:
continue
bundle = ensure_bundle(row.row_number, row.state)
for message in row.issues:
bundle.issues.append(ImportPreviewRowIssue(message=message))
for error in parser_errors:
bundle = ensure_bundle(error.row_number, ImportPreviewState.ERROR)
bundle.issues.append(
ImportPreviewRowIssue(message=error.message, field=error.field)
)
return sorted(issue_map.values(), key=lambda item: item.row_number)
def _state_priority(state: ImportPreviewState | None) -> int:
if state is None:
return -1
if state == ImportPreviewState.ERROR:
return 3
if state == ImportPreviewState.SKIP:
return 2
if state == ImportPreviewState.UPDATE:
return 1
return 0

services/metrics.py (new file)
@@ -0,0 +1,95 @@
from __future__ import annotations
import json
from datetime import datetime
from typing import Any, Dict, Optional
from sqlalchemy.orm import Session
from models.performance_metric import PerformanceMetric
class MetricsService:
def __init__(self, db: Session):
self.db = db
def store_metric(
self,
metric_name: str,
value: float,
labels: Optional[Dict[str, Any]] = None,
endpoint: Optional[str] = None,
method: Optional[str] = None,
status_code: Optional[int] = None,
duration_seconds: Optional[float] = None,
) -> PerformanceMetric:
"""Store a performance metric in the database."""
metric = PerformanceMetric(
timestamp=datetime.utcnow(),
metric_name=metric_name,
value=value,
labels=json.dumps(labels) if labels else None,
endpoint=endpoint,
method=method,
status_code=status_code,
duration_seconds=duration_seconds,
)
self.db.add(metric)
self.db.commit()
self.db.refresh(metric)
return metric
def get_metrics(
self,
metric_name: Optional[str] = None,
start_time: Optional[datetime] = None,
end_time: Optional[datetime] = None,
limit: int = 100,
) -> list[PerformanceMetric]:
"""Retrieve stored metrics with optional filtering."""
query = self.db.query(PerformanceMetric)
if metric_name:
query = query.filter(PerformanceMetric.metric_name == metric_name)
if start_time:
query = query.filter(PerformanceMetric.timestamp >= start_time)
if end_time:
query = query.filter(PerformanceMetric.timestamp <= end_time)
return query.order_by(PerformanceMetric.timestamp.desc()).limit(limit).all()
def get_aggregated_metrics(
self,
metric_name: str,
start_time: Optional[datetime] = None,
end_time: Optional[datetime] = None,
) -> Dict[str, Any]:
"""Get aggregated statistics for a metric."""
query = self.db.query(PerformanceMetric).filter(
PerformanceMetric.metric_name == metric_name
)
if start_time:
query = query.filter(PerformanceMetric.timestamp >= start_time)
if end_time:
query = query.filter(PerformanceMetric.timestamp <= end_time)
metrics = query.all()
if not metrics:
return {"count": 0, "avg": 0, "min": 0, "max": 0}
values = [m.value for m in metrics]
return {
"count": len(values),
"avg": sum(values) / len(values),
"min": min(values),
"max": max(values),
}
def get_metrics_service(db: Session) -> MetricsService:
return MetricsService(db)
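
A hypothetical usage sketch, assuming `session` is a configured SQLAlchemy Session; the metric name and values are illustrative.

# Sketch: record one request metric, then read back aggregates.
service = get_metrics_service(session)
service.store_metric(
    "http_request_duration_seconds",  # illustrative metric name
    value=0.123,
    endpoint="/api/projects",
    method="GET",
    status_code=200,
    duration_seconds=0.123,
)
stats = service.get_aggregated_metrics("http_request_duration_seconds")
print(stats["count"], stats["avg"], stats["min"], stats["max"])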

services/navigation.py (new file)
@@ -0,0 +1,203 @@
from __future__ import annotations
from dataclasses import dataclass, field
from typing import Iterable, List, Sequence
from fastapi import Request
from models.navigation import NavigationLink
from services.repositories import NavigationRepository
from services.session import AuthSession
@dataclass(slots=True)
class NavigationLinkDTO:
id: int
label: str
href: str
match_prefix: str | None
icon: str | None
tooltip: str | None
is_external: bool
children: List["NavigationLinkDTO"] = field(default_factory=list)
@dataclass(slots=True)
class NavigationGroupDTO:
id: int
label: str
icon: str | None
tooltip: str | None
links: List[NavigationLinkDTO] = field(default_factory=list)
@dataclass(slots=True)
class NavigationSidebarDTO:
groups: List[NavigationGroupDTO]
roles: tuple[str, ...]
class NavigationService:
"""Build navigation payloads filtered for the current session."""
def __init__(self, repository: NavigationRepository) -> None:
self._repository = repository
def build_sidebar(
self,
*,
session: AuthSession,
request: Request | None = None,
include_disabled: bool = False,
) -> NavigationSidebarDTO:
roles = self._collect_roles(session)
groups = self._repository.list_groups_with_links(
include_disabled=include_disabled
)
context = self._derive_context(request)
mapped_groups: List[NavigationGroupDTO] = []
for group in groups:
if not include_disabled and not group.is_enabled:
continue
mapped_links = self._map_links(
group.links,
roles,
request=request,
include_disabled=include_disabled,
context=context,
)
if not mapped_links and not include_disabled:
continue
mapped_groups.append(
NavigationGroupDTO(
id=group.id,
label=group.label,
icon=group.icon,
tooltip=group.tooltip,
links=mapped_links,
)
)
return NavigationSidebarDTO(groups=mapped_groups, roles=roles)
def _map_links(
self,
links: Sequence[NavigationLink],
roles: Iterable[str],
*,
request: Request | None,
include_disabled: bool,
context: dict[str, str | None],
include_children: bool = False,
) -> List[NavigationLinkDTO]:
resolved_roles = tuple(roles)
mapped: List[NavigationLinkDTO] = []
for link in sorted(links, key=lambda x: (x.sort_order, x.id)):
if not include_children and link.parent_link_id is not None:
continue
if not include_disabled and (not link.is_enabled):
continue
if not self._link_visible(link, resolved_roles, include_disabled):
continue
href = self._resolve_href(link, request=request, context=context)
if not href:
continue
children = self._map_links(
link.children,
resolved_roles,
request=request,
include_disabled=include_disabled,
context=context,
include_children=True,
)
match_prefix = link.match_prefix or href
mapped.append(
NavigationLinkDTO(
id=link.id,
label=link.label,
href=href,
match_prefix=match_prefix,
icon=link.icon,
tooltip=link.tooltip,
is_external=link.is_external,
children=children,
)
)
return mapped
@staticmethod
def _collect_roles(session: AuthSession) -> tuple[str, ...]:
roles = tuple((session.role_slugs or ()) if session else ())
if session and session.is_authenticated:
return roles
if "anonymous" in roles:
return roles
return roles + ("anonymous",)
@staticmethod
def _derive_context(request: Request | None) -> dict[str, str | None]:
if request is None:
return {"project_id": None, "scenario_id": None}
        has_path_params = hasattr(request, "path_params")
        project_id = request.path_params.get("project_id") if has_path_params else None
        scenario_id = request.path_params.get("scenario_id") if has_path_params else None
if not project_id:
project_id = request.query_params.get("project_id")
if not scenario_id:
scenario_id = request.query_params.get("scenario_id")
return {"project_id": project_id, "scenario_id": scenario_id}
def _resolve_href(
self,
link: NavigationLink,
*,
request: Request | None,
context: dict[str, str | None],
) -> str | None:
if link.route_name:
if request is None:
fallback = link.href_override
if fallback:
return fallback
# Fallback to route name when no request is available
return f"/{link.route_name.replace('.', '/')}"
requires_context = link.slug in {
"profitability",
"profitability-calculator",
"opex",
"capex",
}
if requires_context:
project_id = context.get("project_id")
scenario_id = context.get("scenario_id")
if project_id and scenario_id:
try:
return str(
request.url_for(
link.route_name,
project_id=project_id,
scenario_id=scenario_id,
)
)
except Exception: # pragma: no cover - defensive
pass
try:
return str(request.url_for(link.route_name))
except Exception: # pragma: no cover - defensive
return link.href_override
return link.href_override
@staticmethod
def _link_visible(
link: NavigationLink,
roles: Iterable[str],
include_disabled: bool,
) -> bool:
role_tuple = tuple(roles)
if not include_disabled and not link.is_enabled:
return False
if not link.required_roles:
return True
role_set = set(role_tuple)
return any(role in role_set for role in link.required_roles)
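
A wiring sketch under stated assumptions: `repo` stands in for a NavigationRepository and `auth_session` for an AuthSession obtained from the application's dependency layer.

# Sketch: build the sidebar without a live request (hrefs fall back to
# href_override or a path derived from the route name).
service = NavigationService(repo)
sidebar = service.build_sidebar(session=auth_session, request=None)
for group in sidebar.groups:
    print(group.label, [link.href for link in group.links])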

services/pricing.py (new file)
@@ -0,0 +1,176 @@
"""Pricing service implementing commodity revenue calculations.
This module exposes data models and helpers for computing product pricing
according to the formulas outlined in
``calminer-docs/specifications/price_calculation.md``. It focuses on the core
calculation steps (payable metal, penalties, net revenue) and is intended to be
composed within broader scenario evaluation workflows.
"""
from __future__ import annotations
from dataclasses import dataclass, field
from typing import Mapping
from pydantic import BaseModel, Field, PositiveFloat, field_validator
from services.currency import require_currency
class PricingInput(BaseModel):
"""Normalized inputs for pricing calculations."""
metal: str = Field(..., min_length=1)
ore_tonnage: PositiveFloat = Field(
..., description="Total ore mass processed (metric tonnes)")
    head_grade_pct: PositiveFloat = Field(..., gt=0, le=100, description="Head grade as percent")
    recovery_pct: PositiveFloat = Field(..., gt=0, le=100, description="Recovery rate percent")
payable_pct: float | None = Field(
None, gt=0, le=100, description="Contractual payable percentage")
reference_price: PositiveFloat = Field(
..., description="Reference price in base currency per unit")
treatment_charge: float = Field(0, ge=0)
smelting_charge: float = Field(0, ge=0)
moisture_pct: float = Field(0, ge=0, le=100)
moisture_threshold_pct: float | None = Field(None, ge=0, le=100)
moisture_penalty_per_pct: float | None = Field(None)
impurity_ppm: Mapping[str, float] = Field(default_factory=dict)
impurity_thresholds: Mapping[str, float] = Field(default_factory=dict)
impurity_penalty_per_ppm: Mapping[str, float] = Field(default_factory=dict)
premiums: float = Field(0)
fx_rate: PositiveFloat = Field(
1, description="Multiplier to convert to scenario currency")
currency_code: str | None = Field(
None, description="Optional explicit currency override")
@field_validator("impurity_ppm", mode="before")
@classmethod
def _validate_impurity_mapping(cls, value):
if isinstance(value, Mapping):
return {k: float(v) for k, v in value.items()}
return value
class PricingResult(BaseModel):
"""Structured output summarising pricing computation results."""
metal: str
ore_tonnage: float
head_grade_pct: float
recovery_pct: float
payable_metal_tonnes: float
reference_price: float
gross_revenue: float
moisture_penalty: float
impurity_penalty: float
treatment_smelt_charges: float
premiums: float
net_revenue: float
currency: str | None
@dataclass(frozen=True)
class PricingMetadata:
"""Metadata defaults applied when explicit inputs are omitted."""
default_payable_pct: float = 100.0
default_currency: str | None = "USD"
moisture_threshold_pct: float = 8.0
moisture_penalty_per_pct: float = 0.0
impurity_thresholds: Mapping[str, float] = field(default_factory=dict)
impurity_penalty_per_ppm: Mapping[str, float] = field(default_factory=dict)
def calculate_pricing(
pricing_input: PricingInput,
*,
metadata: PricingMetadata | None = None,
currency: str | None = None,
) -> PricingResult:
"""Calculate pricing metrics for the provided commodity input.
Parameters
----------
pricing_input:
Normalised input data including ore tonnage, grades, charges, and
optional penalties.
metadata:
Optional default metadata applied when specific values are omitted from
``pricing_input``.
currency:
Optional override for the output currency label. Falls back to
``metadata.default_currency`` when not provided.
"""
applied_metadata = metadata or PricingMetadata()
payable_pct = (
pricing_input.payable_pct
if pricing_input.payable_pct is not None
else applied_metadata.default_payable_pct
)
moisture_threshold = (
pricing_input.moisture_threshold_pct
if pricing_input.moisture_threshold_pct is not None
else applied_metadata.moisture_threshold_pct
)
moisture_penalty_factor = (
pricing_input.moisture_penalty_per_pct
if pricing_input.moisture_penalty_per_pct is not None
else applied_metadata.moisture_penalty_per_pct
)
impurity_thresholds = {
**applied_metadata.impurity_thresholds,
**pricing_input.impurity_thresholds,
}
impurity_penalty_factors = {
**applied_metadata.impurity_penalty_per_ppm,
**pricing_input.impurity_penalty_per_ppm,
}
q_metal = pricing_input.ore_tonnage * (pricing_input.head_grade_pct / 100.0) * (
pricing_input.recovery_pct / 100.0
)
payable_metal = q_metal * (payable_pct / 100.0)
gross_revenue_ref = payable_metal * pricing_input.reference_price
charges = pricing_input.treatment_charge + pricing_input.smelting_charge
moisture_excess = max(0.0, pricing_input.moisture_pct - moisture_threshold)
moisture_penalty = moisture_excess * moisture_penalty_factor
impurity_penalty_total = 0.0
for impurity, value in pricing_input.impurity_ppm.items():
threshold = impurity_thresholds.get(impurity, 0.0)
penalty_factor = impurity_penalty_factors.get(impurity, 0.0)
impurity_penalty_total += max(0.0, value - threshold) * penalty_factor
net_revenue_ref = (
gross_revenue_ref - charges - moisture_penalty - impurity_penalty_total
)
net_revenue_ref += pricing_input.premiums
net_revenue = net_revenue_ref * pricing_input.fx_rate
currency_code = require_currency(
currency or pricing_input.currency_code,
default=applied_metadata.default_currency,
)
return PricingResult(
metal=pricing_input.metal,
ore_tonnage=pricing_input.ore_tonnage,
head_grade_pct=pricing_input.head_grade_pct,
recovery_pct=pricing_input.recovery_pct,
payable_metal_tonnes=payable_metal,
reference_price=pricing_input.reference_price,
gross_revenue=gross_revenue_ref,
moisture_penalty=moisture_penalty,
impurity_penalty=impurity_penalty_total,
treatment_smelt_charges=charges,
premiums=pricing_input.premiums,
net_revenue=net_revenue,
currency=currency_code,
)
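
A worked sketch of the formulas above with illustrative figures (not contract terms); it assumes require_currency resolves the metadata default ("USD") when no explicit currency is given.

# Sketch: payable metal, penalties, and net revenue for a copper parcel.
result = calculate_pricing(
    PricingInput(
        metal="copper",
        ore_tonnage=1_000,          # t ore
        head_grade_pct=2.5,         # 1_000 t * 2.5% = 25 t contained metal
        recovery_pct=90,            # 25 t * 90% = 22.5 t recovered
        payable_pct=96,             # 22.5 t * 96% = 21.6 t payable
        reference_price=9_000,      # gross = 21.6 t * 9_000 = 194_400
        treatment_charge=80,
        smelting_charge=20,
        moisture_pct=9,             # 1 pct point over the default 8% threshold
        moisture_penalty_per_pct=50,
    )
)
assert round(result.payable_metal_tonnes, 1) == 21.6
assert round(result.net_revenue) == 194_250  # 194_400 - 100 - 50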

services/reporting.py (new file)
@@ -0,0 +1,875 @@
"""Reporting service layer aggregating deterministic and simulation metrics."""
from __future__ import annotations
from dataclasses import dataclass, field
from datetime import date
import math
from typing import Mapping, Sequence
from urllib.parse import urlencode
import plotly.graph_objects as go
import plotly.io as pio
from fastapi import Request
from models import FinancialCategory, Project, Scenario
from services.financial import (
CashFlow,
ConvergenceError,
PaybackNotReachedError,
internal_rate_of_return,
net_present_value,
payback_period,
)
from services.simulation import (
CashFlowSpec,
SimulationConfig,
SimulationMetric,
SimulationResult,
run_monte_carlo,
)
from services.unit_of_work import UnitOfWork
DEFAULT_DISCOUNT_RATE = 0.1
DEFAULT_ITERATIONS = 500
DEFAULT_PERCENTILES: tuple[float, float, float] = (5.0, 50.0, 95.0)
_COST_CATEGORY_SIGNS: Mapping[FinancialCategory, float] = {
FinancialCategory.REVENUE: 1.0,
FinancialCategory.CAPITAL_EXPENDITURE: -1.0,
FinancialCategory.OPERATING_EXPENDITURE: -1.0,
FinancialCategory.CONTINGENCY: -1.0,
FinancialCategory.OTHER: -1.0,
}
@dataclass(frozen=True)
class IncludeOptions:
"""Flags controlling optional sections in report payloads."""
distribution: bool = False
samples: bool = False
@dataclass(slots=True)
class ReportFilters:
"""Filter parameters applied when selecting scenarios for a report."""
scenario_ids: set[int] | None = None
start_date: date | None = None
end_date: date | None = None
def matches(self, scenario: Scenario) -> bool:
if self.scenario_ids is not None and scenario.id not in self.scenario_ids:
return False
if self.start_date and scenario.start_date and scenario.start_date < self.start_date:
return False
if self.end_date and scenario.end_date and scenario.end_date > self.end_date:
return False
return True
def to_dict(self) -> dict[str, object]:
payload: dict[str, object] = {}
if self.scenario_ids is not None:
payload["scenario_ids"] = sorted(self.scenario_ids)
if self.start_date is not None:
payload["start_date"] = self.start_date
if self.end_date is not None:
payload["end_date"] = self.end_date
return payload
@dataclass(slots=True)
class ScenarioFinancialTotals:
currency: str | None
inflows: float
outflows: float
net: float
by_category: dict[str, float]
def to_dict(self) -> dict[str, object]:
return {
"currency": self.currency,
"inflows": _round_optional(self.inflows),
"outflows": _round_optional(self.outflows),
"net": _round_optional(self.net),
"by_category": {
key: _round_optional(value) for key, value in sorted(self.by_category.items())
},
}
@dataclass(slots=True)
class ScenarioDeterministicMetrics:
currency: str | None
discount_rate: float
compounds_per_year: int
npv: float | None
irr: float | None
payback_period: float | None
notes: list[str] = field(default_factory=list)
def to_dict(self) -> dict[str, object]:
return {
"currency": self.currency,
"discount_rate": _round_optional(self.discount_rate, digits=4),
"compounds_per_year": self.compounds_per_year,
"npv": _round_optional(self.npv),
"irr": _round_optional(self.irr, digits=6),
"payback_period": _round_optional(self.payback_period, digits=4),
"notes": self.notes,
}
@dataclass(slots=True)
class ScenarioMonteCarloResult:
available: bool
notes: list[str] = field(default_factory=list)
result: SimulationResult | None = None
include_samples: bool = False
def to_dict(self) -> dict[str, object]:
if not self.available or self.result is None:
return {
"available": False,
"notes": self.notes,
}
metrics: dict[str, dict[str, object]] = {}
for metric, summary in self.result.summaries.items():
metrics[metric.value] = {
"mean": _round_optional(summary.mean),
"std_dev": _round_optional(summary.std_dev),
"minimum": _round_optional(summary.minimum),
"maximum": _round_optional(summary.maximum),
"percentiles": {
f"{percentile:g}": _round_optional(value)
for percentile, value in sorted(summary.percentiles.items())
},
"sample_size": summary.sample_size,
"failed_runs": summary.failed_runs,
}
samples_payload: dict[str, list[float | None]] | None = None
if self.include_samples and self.result.samples:
samples_payload = {}
for metric, samples in self.result.samples.items():
samples_payload[metric.value] = [
_sanitize_float(sample) for sample in samples.tolist()
]
payload: dict[str, object] = {
"available": True,
"iterations": self.result.iterations,
"metrics": metrics,
"notes": self.notes,
}
if samples_payload:
payload["samples"] = samples_payload
return payload
@dataclass(slots=True)
class ScenarioReport:
scenario: Scenario
totals: ScenarioFinancialTotals
deterministic: ScenarioDeterministicMetrics
monte_carlo: ScenarioMonteCarloResult | None
def to_dict(self) -> dict[str, object]:
scenario_info = {
"id": self.scenario.id,
"project_id": self.scenario.project_id,
"name": self.scenario.name,
"description": self.scenario.description,
"status": self.scenario.status.value if hasattr(self.scenario.status, 'value') else self.scenario.status,
"start_date": self.scenario.start_date,
"end_date": self.scenario.end_date,
"currency": self.scenario.currency,
"primary_resource": self.scenario.primary_resource.value
if self.scenario.primary_resource and hasattr(self.scenario.primary_resource, 'value')
else self.scenario.primary_resource,
"discount_rate": _round_optional(self.deterministic.discount_rate, digits=4),
"created_at": self.scenario.created_at,
"updated_at": self.scenario.updated_at,
"simulation_parameter_count": len(self.scenario.simulation_parameters or []),
}
payload: dict[str, object] = {
"scenario": scenario_info,
"financials": self.totals.to_dict(),
"metrics": self.deterministic.to_dict(),
}
if self.monte_carlo is not None:
payload["monte_carlo"] = self.monte_carlo.to_dict()
return payload
@dataclass(slots=True)
class AggregatedMetric:
average: float | None
minimum: float | None
maximum: float | None
def to_dict(self) -> dict[str, object]:
return {
"average": _round_optional(self.average),
"minimum": _round_optional(self.minimum),
"maximum": _round_optional(self.maximum),
}
@dataclass(slots=True)
class ProjectAggregates:
total_inflows: float
total_outflows: float
total_net: float
deterministic_metrics: dict[str, AggregatedMetric]
def to_dict(self) -> dict[str, object]:
return {
"financials": {
"total_inflows": _round_optional(self.total_inflows),
"total_outflows": _round_optional(self.total_outflows),
"total_net": _round_optional(self.total_net),
},
"deterministic_metrics": {
metric: data.to_dict()
for metric, data in sorted(self.deterministic_metrics.items())
},
}
@dataclass(slots=True)
class MetricComparison:
metric: str
direction: str
best: tuple[int, str, float] | None
worst: tuple[int, str, float] | None
average: float | None
def to_dict(self) -> dict[str, object]:
return {
"metric": self.metric,
"direction": self.direction,
"best": _comparison_entry(self.best),
"worst": _comparison_entry(self.worst),
"average": _round_optional(self.average),
}
def parse_include_tokens(raw: str | None) -> IncludeOptions:
tokens: set[str] = set()
if raw:
for part in raw.split(","):
token = part.strip().lower()
if token:
tokens.add(token)
if "all" in tokens:
return IncludeOptions(distribution=True, samples=True)
return IncludeOptions(
distribution=bool({"distribution", "monte_carlo", "mc"} & tokens),
samples="samples" in tokens,
)
def validate_percentiles(values: Sequence[float] | None) -> tuple[float, ...]:
if not values:
return DEFAULT_PERCENTILES
seen: set[float] = set()
cleaned: list[float] = []
for value in values:
percentile = float(value)
if percentile < 0.0 or percentile > 100.0:
raise ValueError("Percentiles must be between 0 and 100.")
if percentile not in seen:
seen.add(percentile)
cleaned.append(percentile)
if not cleaned:
return DEFAULT_PERCENTILES
return tuple(cleaned)
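
Quick sanity examples for the two helpers above, a sketch based only on the behaviour shown:

# parse_include_tokens accepts a comma-separated string; validate_percentiles
# dedupes while preserving order and falls back to the defaults.
opts = parse_include_tokens("distribution, samples")
assert opts.distribution and opts.samples
assert parse_include_tokens(None) == IncludeOptions()
assert validate_percentiles(None) == DEFAULT_PERCENTILES
assert validate_percentiles([50, 90, 50]) == (50.0, 90.0)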
class ReportingService:
"""Coordinates project and scenario reporting aggregation."""
def __init__(self, uow: UnitOfWork) -> None:
self._uow = uow
def project_summary(
self,
project: Project,
*,
filters: ReportFilters,
include: IncludeOptions,
iterations: int,
percentiles: tuple[float, ...],
) -> dict[str, object]:
scenarios = self._load_scenarios(project.id, filters)
reports = [
self._build_scenario_report(
scenario,
include_distribution=include.distribution,
include_samples=include.samples,
iterations=iterations,
percentiles=percentiles,
)
for scenario in scenarios
]
aggregates = self._aggregate_project(reports)
return {
"project": _project_payload(project),
"scenario_count": len(reports),
"filters": filters.to_dict(),
"aggregates": aggregates.to_dict(),
"scenarios": [report.to_dict() for report in reports],
}
def scenario_comparison(
self,
project: Project,
scenarios: Sequence[Scenario],
*,
include: IncludeOptions,
iterations: int,
percentiles: tuple[float, ...],
) -> dict[str, object]:
reports = [
self._build_scenario_report(
self._reload_scenario(scenario.id),
include_distribution=include.distribution,
include_samples=include.samples,
iterations=iterations,
percentiles=percentiles,
)
for scenario in scenarios
]
comparison = {
metric: data.to_dict()
for metric, data in self._build_comparisons(reports).items()
}
return {
"project": _project_payload(project),
"scenarios": [report.to_dict() for report in reports],
"comparison": comparison,
}
def scenario_distribution(
self,
scenario: Scenario,
*,
include: IncludeOptions,
iterations: int,
percentiles: tuple[float, ...],
) -> dict[str, object]:
report = self._build_scenario_report(
self._reload_scenario(scenario.id),
include_distribution=True,
include_samples=include.samples,
iterations=iterations,
percentiles=percentiles,
)
return {
"scenario": report.to_dict()["scenario"],
"summary": report.totals.to_dict(),
"metrics": report.deterministic.to_dict(),
"monte_carlo": (
                report.monte_carlo.to_dict() if report.monte_carlo else {"available": False}
),
}
def _load_scenarios(self, project_id: int, filters: ReportFilters) -> list[Scenario]:
scenarios = self._uow.scenarios.list_for_project(
project_id, with_children=True)
return [scenario for scenario in scenarios if filters.matches(scenario)]
def _reload_scenario(self, scenario_id: int) -> Scenario:
return self._uow.scenarios.get(scenario_id, with_children=True)
def _build_scenario_report(
self,
scenario: Scenario,
*,
include_distribution: bool,
include_samples: bool,
iterations: int,
percentiles: tuple[float, ...],
) -> ScenarioReport:
cash_flows, totals = _build_cash_flows(scenario)
deterministic = _calculate_deterministic_metrics(
scenario, cash_flows, totals)
monte_carlo: ScenarioMonteCarloResult | None = None
if include_distribution:
monte_carlo = _run_monte_carlo(
scenario,
cash_flows,
include_samples=include_samples,
iterations=iterations,
percentiles=percentiles,
)
return ScenarioReport(
scenario=scenario,
totals=totals,
deterministic=deterministic,
monte_carlo=monte_carlo,
)
def _aggregate_project(self, reports: Sequence[ScenarioReport]) -> ProjectAggregates:
total_inflows = sum(report.totals.inflows for report in reports)
total_outflows = sum(report.totals.outflows for report in reports)
total_net = sum(report.totals.net for report in reports)
metrics: dict[str, AggregatedMetric] = {}
for metric_name in ("npv", "irr", "payback_period"):
values = [
getattr(report.deterministic, metric_name)
for report in reports
if getattr(report.deterministic, metric_name) is not None
]
if values:
metrics[metric_name] = AggregatedMetric(
average=sum(values) / len(values),
minimum=min(values),
maximum=max(values),
)
return ProjectAggregates(
total_inflows=total_inflows,
total_outflows=total_outflows,
total_net=total_net,
deterministic_metrics=metrics,
)
def _build_comparisons(
self, reports: Sequence[ScenarioReport]
) -> Mapping[str, MetricComparison]:
comparisons: dict[str, MetricComparison] = {}
for metric_name, direction in (
("npv", "higher_is_better"),
("irr", "higher_is_better"),
("payback_period", "lower_is_better"),
):
entries: list[tuple[int, str, float]] = []
for report in reports:
value = getattr(report.deterministic, metric_name)
if value is None:
continue
entries.append(
(report.scenario.id, report.scenario.name, value))
if not entries:
continue
if direction == "higher_is_better":
best = max(entries, key=lambda item: item[2])
worst = min(entries, key=lambda item: item[2])
else:
best = min(entries, key=lambda item: item[2])
worst = max(entries, key=lambda item: item[2])
average = sum(item[2] for item in entries) / len(entries)
comparisons[metric_name] = MetricComparison(
metric=metric_name,
direction=direction,
best=best,
worst=worst,
average=average,
)
return comparisons
def build_project_summary_context(
self,
project: Project,
filters: ReportFilters,
include: IncludeOptions,
iterations: int,
percentiles: tuple[float, ...],
request: Request,
) -> dict[str, object]:
"""Build template context for project summary page."""
scenarios = self._load_scenarios(project.id, filters)
reports = [
self._build_scenario_report(
scenario,
include_distribution=include.distribution,
include_samples=include.samples,
iterations=iterations,
percentiles=percentiles,
)
for scenario in scenarios
]
aggregates = self._aggregate_project(reports)
return {
"request": request,
"project": _project_payload(project),
"scenario_count": len(reports),
"aggregates": aggregates.to_dict(),
"scenarios": [report.to_dict() for report in reports],
"filters": filters.to_dict(),
"include_options": include,
"iterations": iterations,
"percentiles": percentiles,
"title": f"Project Summary · {project.name}",
"subtitle": "Aggregated financial and simulation insights across scenarios.",
"actions": [
{
"href": request.url_for(
"reports.project_summary",
project_id=project.id,
),
"label": "Download JSON",
}
],
"chart_data": self._generate_npv_comparison_chart(reports),
}
def build_scenario_comparison_context(
self,
project: Project,
scenarios: Sequence[Scenario],
include: IncludeOptions,
iterations: int,
percentiles: tuple[float, ...],
request: Request,
) -> dict[str, object]:
"""Build template context for scenario comparison page."""
reports = [
self._build_scenario_report(
self._reload_scenario(scenario.id),
include_distribution=include.distribution,
include_samples=include.samples,
iterations=iterations,
percentiles=percentiles,
)
for scenario in scenarios
]
comparison = {
metric: data.to_dict()
for metric, data in self._build_comparisons(reports).items()
}
comparison_json_url = request.url_for(
"reports.project_scenario_comparison",
project_id=project.id,
)
scenario_ids = [str(s.id) for s in scenarios]
comparison_query = urlencode(
[("scenario_ids", str(identifier)) for identifier in scenario_ids]
)
if comparison_query:
comparison_json_url = f"{comparison_json_url}?{comparison_query}"
return {
"request": request,
"project": _project_payload(project),
"scenarios": [report.to_dict() for report in reports],
"comparison": comparison,
"include_options": include,
"iterations": iterations,
"percentiles": percentiles,
"title": f"Scenario Comparison · {project.name}",
"subtitle": "Evaluate deterministic metrics and Monte Carlo trends side by side.",
"actions": [
{
"href": comparison_json_url,
"label": "Download JSON",
}
],
}
def build_scenario_distribution_context(
self,
scenario: Scenario,
include: IncludeOptions,
iterations: int,
percentiles: tuple[float, ...],
request: Request,
) -> dict[str, object]:
"""Build template context for scenario distribution page."""
report = self._build_scenario_report(
self._reload_scenario(scenario.id),
include_distribution=True,
include_samples=include.samples,
iterations=iterations,
percentiles=percentiles,
)
return {
"request": request,
"scenario": report.to_dict()["scenario"],
"summary": report.totals.to_dict(),
"metrics": report.deterministic.to_dict(),
"monte_carlo": (
                report.monte_carlo.to_dict() if report.monte_carlo else {"available": False}
),
"include_options": include,
"iterations": iterations,
"percentiles": percentiles,
"title": f"Scenario Distribution · {scenario.name}",
"subtitle": "Deterministic and simulated distributions for a single scenario.",
"actions": [
{
"href": request.url_for(
"reports.scenario_distribution",
scenario_id=scenario.id,
),
"label": "Download JSON",
}
],
"chart_data": self._generate_distribution_histogram(report.monte_carlo) if report.monte_carlo else "{}",
}
def _generate_npv_comparison_chart(self, reports: Sequence[ScenarioReport]) -> str:
"""Generate Plotly chart JSON for NPV comparison across scenarios."""
scenario_names = []
npv_values = []
for report in reports:
scenario_names.append(report.scenario.name)
npv_values.append(report.deterministic.npv or 0)
fig = go.Figure(data=[
go.Bar(
x=scenario_names,
y=npv_values,
name='NPV',
marker_color='lightblue'
)
])
fig.update_layout(
title="NPV Comparison Across Scenarios",
xaxis_title="Scenario",
yaxis_title="NPV",
showlegend=False
)
return pio.to_json(fig) or "{}"
def _generate_distribution_histogram(self, monte_carlo: ScenarioMonteCarloResult) -> str:
"""Generate Plotly histogram for Monte Carlo distribution."""
if not monte_carlo.available or not monte_carlo.result or not monte_carlo.result.samples:
return "{}"
# Get NPV samples
npv_samples = monte_carlo.result.samples.get(SimulationMetric.NPV, [])
if len(npv_samples) == 0:
return "{}"
fig = go.Figure(data=[
go.Histogram(
x=npv_samples,
nbinsx=50,
name='NPV Distribution',
marker_color='lightgreen'
)
])
fig.update_layout(
title="Monte Carlo NPV Distribution",
xaxis_title="NPV",
yaxis_title="Frequency",
showlegend=False
)
return pio.to_json(fig) or "{}"
def _build_cash_flows(scenario: Scenario) -> tuple[list[CashFlow], ScenarioFinancialTotals]:
cash_flows: list[CashFlow] = []
by_category: dict[str, float] = {}
inflows = 0.0
outflows = 0.0
net = 0.0
period_index = 0
for financial_input in scenario.financial_inputs or []:
sign = _COST_CATEGORY_SIGNS.get(financial_input.category, -1.0)
amount = float(financial_input.amount) * sign
net += amount
if amount >= 0:
inflows += amount
else:
outflows += -amount
by_category.setdefault(financial_input.category.value, 0.0)
by_category[financial_input.category.value] += amount
if financial_input.effective_date is not None:
cash_flows.append(
CashFlow(amount=amount, date=financial_input.effective_date)
)
else:
cash_flows.append(
CashFlow(amount=amount, period_index=period_index))
period_index += 1
currency = scenario.currency
if currency is None and scenario.financial_inputs:
currency = scenario.financial_inputs[0].currency
totals = ScenarioFinancialTotals(
currency=currency,
inflows=inflows,
outflows=outflows,
net=net,
by_category=by_category,
)
return cash_flows, totals
def _calculate_deterministic_metrics(
scenario: Scenario,
cash_flows: Sequence[CashFlow],
totals: ScenarioFinancialTotals,
) -> ScenarioDeterministicMetrics:
notes: list[str] = []
discount_rate = _normalise_discount_rate(scenario.discount_rate)
if scenario.discount_rate is None:
notes.append(
f"Discount rate not set; defaulted to {discount_rate:.2%}."
)
if not cash_flows:
notes.append(
"No financial inputs available for deterministic metrics.")
return ScenarioDeterministicMetrics(
currency=totals.currency,
discount_rate=discount_rate,
compounds_per_year=1,
npv=None,
irr=None,
payback_period=None,
notes=notes,
)
npv_value: float | None
try:
npv_value = net_present_value(
discount_rate,
cash_flows,
compounds_per_year=1,
)
except ValueError as exc:
npv_value = None
notes.append(f"NPV unavailable: {exc}.")
irr_value: float | None
try:
irr_value = internal_rate_of_return(
cash_flows,
compounds_per_year=1,
)
except (ValueError, ConvergenceError) as exc:
irr_value = None
notes.append(f"IRR unavailable: {exc}.")
payback_value: float | None
try:
payback_value = payback_period(
cash_flows,
compounds_per_year=1,
)
except (ValueError, PaybackNotReachedError) as exc:
payback_value = None
notes.append(f"Payback period unavailable: {exc}.")
return ScenarioDeterministicMetrics(
currency=totals.currency,
discount_rate=discount_rate,
compounds_per_year=1,
npv=npv_value,
irr=irr_value,
payback_period=payback_value,
notes=notes,
)
def _run_monte_carlo(
scenario: Scenario,
cash_flows: Sequence[CashFlow],
*,
include_samples: bool,
iterations: int,
percentiles: tuple[float, ...],
) -> ScenarioMonteCarloResult:
if not cash_flows:
return ScenarioMonteCarloResult(
available=False,
notes=["No financial inputs available for Monte Carlo simulation."],
)
discount_rate = _normalise_discount_rate(scenario.discount_rate)
specs = [CashFlowSpec(cash_flow=flow) for flow in cash_flows]
notes: list[str] = []
if not scenario.simulation_parameters:
notes.append(
"Scenario has no stochastic parameters; simulation mirrors deterministic cash flows."
)
config = SimulationConfig(
iterations=iterations,
discount_rate=discount_rate,
metrics=(
SimulationMetric.NPV,
SimulationMetric.IRR,
SimulationMetric.PAYBACK,
),
percentiles=percentiles,
return_samples=include_samples,
)
try:
result = run_monte_carlo(specs, config)
except Exception as exc: # pragma: no cover - safeguard for unexpected failures
notes.append(f"Simulation failed: {exc}.")
return ScenarioMonteCarloResult(available=False, notes=notes)
return ScenarioMonteCarloResult(
available=True,
notes=notes,
result=result,
include_samples=include_samples,
)
def _normalise_discount_rate(value: float | None) -> float:
if value is None:
return DEFAULT_DISCOUNT_RATE
rate = float(value)
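    # A value like 10 is read as 10% and converted to 0.10; values already in
    # fractional form (e.g. 0.08) pass through unchanged.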
if rate > 1.0:
return rate / 100.0
return rate
def _sanitize_float(value: float | None) -> float | None:
if value is None:
return None
if math.isnan(value) or math.isinf(value):
return None
return float(value)
def _round_optional(value: float | None, *, digits: int = 2) -> float | None:
clean = _sanitize_float(value)
if clean is None:
return None
return round(clean, digits)
def _comparison_entry(entry: tuple[int, str, float] | None) -> dict[str, object] | None:
if entry is None:
return None
scenario_id, name, value = entry
return {
"scenario_id": scenario_id,
"name": name,
"value": _round_optional(value),
}
def _project_payload(project: Project) -> dict[str, object]:
return {
"id": project.id,
"name": project.name,
"location": project.location,
"operation_type": project.operation_type.value,
"description": project.description,
"created_at": project.created_at,
"updated_at": project.updated_at,
}

services/repositories.py (new file, 1268 lines; diff suppressed because it is too large)

(new file, name not shown in this view)
@@ -0,0 +1,54 @@
"""Scenario evaluation services including pricing integration."""
from __future__ import annotations
from dataclasses import dataclass
from typing import Iterable
from models.scenario import Scenario
from services.pricing import (
PricingInput,
PricingMetadata,
PricingResult,
calculate_pricing,
)
@dataclass(slots=True)
class ScenarioPricingConfig:
"""Configuration for pricing evaluation within a scenario."""
metadata: PricingMetadata | None = None
@dataclass(slots=True)
class ScenarioPricingSnapshot:
"""Captured pricing results for a scenario."""
scenario_id: int
results: list[PricingResult]
class ScenarioPricingEvaluator:
"""Evaluate scenario profitability inputs using pricing services."""
def __init__(self, config: ScenarioPricingConfig | None = None) -> None:
self._config = config or ScenarioPricingConfig()
def evaluate(
self,
scenario: Scenario,
*,
inputs: Iterable[PricingInput],
metadata_override: PricingMetadata | None = None,
) -> ScenarioPricingSnapshot:
metadata = metadata_override or self._config.metadata
results: list[PricingResult] = []
for pricing_input in inputs:
result = calculate_pricing(
pricing_input,
metadata=metadata,
currency=scenario.currency,
)
results.append(result)
return ScenarioPricingSnapshot(scenario_id=scenario.id, results=results)
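
A hypothetical invocation, assuming `scenario` is a persisted Scenario and `inputs` an iterable of PricingInput built from profitability data elsewhere in the app.

# Sketch: evaluate a batch of pricing inputs in the scenario's currency.
evaluator = ScenarioPricingEvaluator()
snapshot = evaluator.evaluate(scenario, inputs=inputs)
for item in snapshot.results:
    print(item.metal, item.net_revenue, item.currency)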

(new file, name not shown in this view)
@@ -0,0 +1,106 @@
from __future__ import annotations
from dataclasses import dataclass
from datetime import date
from typing import Iterable, Sequence
from models import Scenario, ScenarioStatus
from services.exceptions import ScenarioValidationError
ALLOWED_STATUSES: frozenset[ScenarioStatus] = frozenset(
{ScenarioStatus.DRAFT, ScenarioStatus.ACTIVE}
)
@dataclass(frozen=True)
class _ValidationContext:
scenarios: Sequence[Scenario]
@property
def scenario_ids(self) -> list[int]:
return [scenario.id for scenario in self.scenarios if scenario.id is not None]
class ScenarioComparisonValidator:
"""Validates scenarios prior to comparison workflows."""
def validate(self, scenarios: Sequence[Scenario] | Iterable[Scenario]) -> None:
scenario_list = list(scenarios)
if len(scenario_list) < 2:
# Nothing to validate when fewer than two scenarios are provided.
return
context = _ValidationContext(scenario_list)
self._ensure_same_project(context)
self._ensure_allowed_status(context)
self._ensure_shared_currency(context)
self._ensure_timeline_overlap(context)
self._ensure_shared_primary_resource(context)
def _ensure_same_project(self, context: _ValidationContext) -> None:
project_ids = {scenario.project_id for scenario in context.scenarios}
if len(project_ids) > 1:
raise ScenarioValidationError(
code="SCENARIO_PROJECT_MISMATCH",
message="Selected scenarios do not belong to the same project.",
scenario_ids=context.scenario_ids,
)
def _ensure_allowed_status(self, context: _ValidationContext) -> None:
disallowed = [
scenario
for scenario in context.scenarios
if scenario.status not in ALLOWED_STATUSES
]
if disallowed:
raise ScenarioValidationError(
code="SCENARIO_STATUS_INVALID",
message="Archived scenarios cannot be compared.",
scenario_ids=[
scenario.id for scenario in disallowed if scenario.id is not None],
)
def _ensure_shared_currency(self, context: _ValidationContext) -> None:
currencies = {
scenario.currency
for scenario in context.scenarios
if scenario.currency is not None
}
if len(currencies) > 1:
raise ScenarioValidationError(
code="SCENARIO_CURRENCY_MISMATCH",
message="Scenarios use different currencies and cannot be compared.",
scenario_ids=context.scenario_ids,
)
def _ensure_timeline_overlap(self, context: _ValidationContext) -> None:
ranges = [
(scenario.start_date, scenario.end_date)
for scenario in context.scenarios
if scenario.start_date and scenario.end_date
]
if len(ranges) < 2:
return
latest_start: date = max(start for start, _ in ranges)
earliest_end: date = min(end for _, end in ranges)
if latest_start > earliest_end:
raise ScenarioValidationError(
code="SCENARIO_TIMELINE_DISJOINT",
message="Scenario timelines do not overlap; adjust the comparison window.",
scenario_ids=context.scenario_ids,
)
def _ensure_shared_primary_resource(self, context: _ValidationContext) -> None:
resources = {
scenario.primary_resource
for scenario in context.scenarios
if scenario.primary_resource is not None
}
if len(resources) > 1:
raise ScenarioValidationError(
code="SCENARIO_RESOURCE_MISMATCH",
message="Scenarios target different primary resources and cannot be compared.",
scenario_ids=context.scenario_ids,
)
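
A minimal sketch, assuming `scenario_a` and `scenario_b` are loaded Scenario instances and that ScenarioValidationError exposes the code, message, and scenario_ids it is constructed with.

# Sketch: validate a pair of scenarios before comparison.
validator = ScenarioComparisonValidator()
try:
    validator.validate([scenario_a, scenario_b])
except ScenarioValidationError as exc:
    print(exc.code, exc.message, exc.scenario_ids)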

Some files were not shown because too many files have changed in this diff.