feat: Add currency management feature with CRUD operations
Some checks failed: Run Tests / test (push), failing after 5m2s
- Introduced a new template for currency overview and management (`currencies.html`).
- Updated the footer to include attribution to AllYouCanGET.
- Added a "Currencies" link to the main navigation header.
- Implemented end-to-end tests for currency creation, update, and activation toggling.
- Created unit tests for the currency API endpoints, covering creation, updating, and activation toggling.
- Added a fixture to seed default currencies for testing.
- Enhanced database setup tests to ensure proper seeding and migration handling.
@@ -10,22 +10,58 @@ status: skeleton

> e.g., choice of FastAPI, PostgreSQL, SQLAlchemy, Chart.js, Jinja2 templates.

The architecture of CalMiner is influenced by several technical constraints that shape its design and implementation:

1. **Framework Selection**: The choice of FastAPI as the web framework imposes constraints on how the application handles requests, routing, and middleware. FastAPI's asynchronous capabilities must be leveraged appropriately to ensure optimal performance.
2. **Database Technology**: The use of PostgreSQL as the primary database system dictates the data modeling, querying capabilities, and transaction management strategies. SQLAlchemy ORM is used for database interactions, which requires adherence to its conventions and limitations.
3. **Frontend Technologies**: The decision to use Jinja2 for server-side templating and Chart.js for data visualization influences the structure of the frontend code and the way dynamic content is rendered.
4. **Simulation Logic**: The Monte Carlo simulation logic must be designed to efficiently handle large datasets and perform computations within the constraints of the chosen programming language (Python) and its libraries.

## Organizational Constraints

> e.g., team skillsets, development workflows, CI/CD pipelines.

Restrictions arising from organizational factors include:

1. **Team Expertise**: The development team's familiarity with FastAPI, SQLAlchemy, and frontend technologies like Jinja2 and Chart.js influences the architecture choices to ensure maintainability and ease of development.
2. **Development Processes**: The adoption of Agile methodologies and CI/CD pipelines (using Gitea Actions) shapes the architecture to support continuous integration, automated testing, and deployment practices.
3. **Collaboration Tools**: The use of specific collaboration and version control tools (e.g., Gitea) affects how code is managed, reviewed, and integrated, impacting the overall architecture and development workflow.
4. **Documentation Standards**: The requirement for comprehensive documentation (as seen in the `docs/` folder) necessitates an architecture that is well-structured and easy to understand for both current and future team members.
5. **Knowledge Sharing**: The need for effective knowledge sharing and onboarding processes influences the architecture to ensure that it is accessible and understandable for new team members.
6. **Resource Availability**: The availability of hardware, software, and human resources within the organization can impose constraints on the architecture, affecting decisions related to scalability, performance, and feature implementation.

## Regulatory Constraints

> e.g., data privacy laws, industry standards.

Regulatory constraints that impact the architecture of CalMiner include:

1. **Data Privacy Compliance**: The architecture must ensure compliance with data privacy regulations such as GDPR or CCPA, which may dictate how user data is collected, stored, and processed.
2. **Industry Standards**: Adherence to industry-specific standards and best practices may influence the design of data models, security measures, and reporting functionalities.
3. **Auditability**: The system may need to incorporate logging and auditing features to meet regulatory requirements, affecting the architecture of data storage and access controls.
4. **Data Retention Policies**: Regulatory requirements regarding data retention and deletion may impose constraints on how long certain types of data can be stored, influencing database design and data lifecycle management.
5. **Security Standards**: Compliance with security standards (e.g., ISO/IEC 27001) may necessitate the implementation of specific security measures, such as encryption, access controls, and vulnerability management, which impact the overall architecture.

## Environmental Constraints

> e.g., deployment environments, cloud provider limitations.

Environmental constraints affecting the architecture include:

1. **Deployment Environments**: The architecture must accommodate various deployment environments (development, testing, production) with differing configurations and resource allocations.
2. **Cloud Provider Limitations**: If deployed on a specific cloud provider, the architecture may need to align with the provider's services, limitations, and best practices, such as using managed databases or specific container orchestration tools.
3. **Containerization**: The use of Docker for containerization imposes constraints on how the application is packaged, deployed, and scaled, influencing the architecture to ensure compatibility with container orchestration platforms.
4. **Scalability Requirements**: The architecture must be designed to scale efficiently based on anticipated load and usage patterns, considering the limitations of the chosen infrastructure.

## Performance Constraints

> e.g., response time requirements, scalability needs.

Current performance constraints include:

1. **Response Time Requirements**: The architecture must ensure that the system can respond to user requests within a specified time frame, which may impact design decisions related to caching, database queries, and API performance.
2. **Scalability Needs**: The system should be able to handle increased load and user traffic without significant degradation in performance, necessitating a scalable architecture that can grow with demand.

## Security Constraints

> e.g., authentication mechanisms, data encryption standards.
@@ -36,3 +36,22 @@ The architecture encompasses the following key areas:

10. **Integration Points**: Interfaces for integrating with external systems and services.
11. **Monitoring and Logging**: Systems for tracking system performance and user activity.
12. **Maintenance and Support**: Processes for ongoing system maintenance and user support.

## Diagram

```mermaid
sequenceDiagram
    participant PM as Project Manager
    participant DA as Data Analyst
    participant EX as Executive
    participant CM as CalMiner System

    PM->>CM: Create and manage scenarios
    DA->>CM: Analyze simulation results
    EX->>CM: Review reports and dashboards
    CM->>PM: Provide scenario planning tools
    CM->>DA: Deliver analysis insights
    CM->>EX: Generate high-level reports
```

This diagram illustrates the key components of the CalMiner system and their interactions with external actors.
docs/idempotency_audit.md (new file, 31 lines)
@@ -0,0 +1,31 @@

# Setup Script Idempotency Audit (2025-10-25)

This note captures the current evaluation of idempotent behaviour for `scripts/setup_database.py` and outlines follow-up actions.

## Admin Tasks

- **ensure_database**: guarded by a `SELECT 1 FROM pg_database` existence check, so re-runs are safe (see the sketch after this list). Failure mode: network issues or missing privileges surface as raw psycopg2 errors without additional context.
- **ensure_role**: checks `pg_roles`, creates the role if missing, and reapplies grants each time. Subsequent runs execute the grants again, but PostgreSQL tolerates repeated grants.
- **ensure_schema**: uses an `information_schema` guard and respects `--dry-run`; idempotent when the schema is `public` or already present.
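A minimal sketch of that guard pattern, assuming psycopg2; the function below is a simplified stand-in, not the script's actual signature:

```python
import psycopg2

def ensure_database(admin_dsn: str, db_name: str) -> None:
    """Create db_name only if it does not exist yet; safe to re-run."""
    conn = psycopg2.connect(admin_dsn)
    conn.autocommit = True  # CREATE DATABASE cannot run inside a transaction
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT 1 FROM pg_database WHERE datname = %s", (db_name,))
            if cur.fetchone() is None:
                cur.execute(f'CREATE DATABASE "{db_name}"')
            # On a second run the guard matches and nothing is executed.
    finally:
        conn.close()
```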
## Application Tasks

- **initialize_schema**: relies on SQLAlchemy `create_all(checkfirst=True)`, so it is repeatable (see the sketch after this list). Dry-run output remains descriptive.
- **run_migrations**: the new baseline workflow applies `000_base.sql` once and records the legacy scripts as applied. Subsequent runs detect the baseline in `schema_migrations` and skip reapplication.
## Seeding

- `seed_baseline_data` seeds currencies and measurement units with upsert logic (see the sketch after this list). Verification now raises on missing data, preventing silent failures.
- Running `--seed-data` repeatedly performs `ON CONFLICT` updates, making the operation safe.
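The upsert shape that makes repeated seeding safe, sketched with psycopg2's `execute_values`; the seed rows shown are an illustrative excerpt, not the full catalog:

```python
from psycopg2.extras import execute_values

# Illustrative excerpt of the ASCII-safe seed catalog.
CURRENCY_SEEDS = [
    ("USD", "US Dollar", "USD$"),
    ("EUR", "Euro", "EUR"),
]

def seed_currencies(cur) -> None:
    # ON CONFLICT turns re-runs into updates instead of duplicate-key errors.
    execute_values(
        cur,
        """
        INSERT INTO currency (code, name, symbol) VALUES %s
        ON CONFLICT (code) DO UPDATE
            SET name = EXCLUDED.name, symbol = EXCLUDED.symbol
        """,
        CURRENCY_SEEDS,
    )
```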
## Outstanding Risks

1. The baseline migration relies on the legacy files being present when it first executes; if they are removed beforehand, the old entries are never marked as applied. (Low risk given the repository state.)
2. `ensure_database` and `ensure_role` do not wrap SQL execution errors with additional context beyond the psycopg2 messages.
3. Baseline verification assumes migrations and seeding run in the same process; manual runs of `scripts/seed_data.py` without the baseline could still fail.

## Recommended Actions

- Add regression tests ensuring repeated executions of key CLI paths (`--run-migrations`, `--seed-data`) result in no-op behaviour after the first run (see the sketch after this list).
- Extend logging/error handling for admin operations to provide clearer messages on repeated failures.
- Consider a preflight check that warns about potential drift when the migrations directory lacks the legacy files but the baseline is still pending.
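One possible shape for such a regression test, assuming pytest and the documented `Applied N migrations` log line; the zero-migrations message on re-run is an assumption:

```python
import subprocess
import sys

def run_setup(*flags: str) -> str:
    """Invoke the setup CLI and capture its combined output."""
    result = subprocess.run(
        [sys.executable, "scripts/setup_database.py", *flags],
        capture_output=True, text=True, check=True,
    )
    return result.stdout + result.stderr

def test_run_migrations_is_idempotent():
    first = run_setup("--run-migrations")
    second = run_setup("--run-migrations")
    assert "Applied 1 migrations" in first   # fresh database applies 000_base.sql
    assert "Applied 0 migrations" in second  # assumed re-run message: a no-op
```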
docs/logging_audit.md (new file, 29 lines)
@@ -0,0 +1,29 @@

# Setup Script Logging Audit (2025-10-25)

The following observations capture the current logging behaviour in `scripts/setup_database.py` and highlight areas requiring improved error handling and messaging.

## Connection Validation

- `validate_admin_connection` and `validate_application_connection` log entry/exit messages and raise `RuntimeError` with context if the connection fails. This coverage is sufficient.
- `ensure_database` logs creation states but does not surface connection or SQL exceptions beyond the initial connection acquisition. When the inner `cursor.execute` calls fail, the exceptions bubble up without contextual logging.

## Migration Runner

- Lists pending migrations and logs each application attempt.
- When the baseline is pending, the script logs whether it is a dry run or a live application and records the legacy file marking. However, if `_apply_migration_file` raises an exception, the caller logs the failure and re-raises; there is no wrapping message guiding users toward manual cleanup.
- Legacy migration marking happens silently (info logs only). Failures during the insert into `schema_migrations` would currently propagate without added guidance.

## Seeding Workflow

- `seed_baseline_data` announces each seeding phase and skips verification in dry-run mode with a log breadcrumb.
- `_verify_seeded_data` warns about missing currencies/units and inactive defaults but does **not** raise errors, meaning CI can pass while the database is incomplete. There is no explicit log when verification succeeds.
- `_seed_units` logs when the `measurement_unit` table is missing, which is helpful, but the warning is the only feedback; no exception is raised.

## Suggested Enhancements

1. Wrap baseline application and legacy marking in `try/except` blocks that log actionable remediation steps before re-raising (see the sketch after this list).
2. Promote seed verification failures (missing or inactive records) to exceptions so automated workflows fail fast; add success logs for clarity.
3. Add contextual logging around currency/measurement-unit insert failures, particularly around the `execute_values` calls, to aid debugging malformed data.
4. Introduce structured logging (log codes or phases) for the major steps (`CONNECT`, `MIGRATE`, `SEED`, `VERIFY`) to make scanning log files easier.
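A sketch combining items 1 and 4, assuming the stdlib `logging` module; the function and message wording are illustrative:

```python
import logging

logger = logging.getLogger("setup_database")

def log_phase(phase: str, message: str) -> None:
    # Stable phase codes (CONNECT, MIGRATE, SEED, VERIFY) make logs easy to grep.
    logger.info("[%s] %s", phase, message)

def apply_baseline(conn, path: str) -> None:
    log_phase("MIGRATE", f"applying baseline {path}")
    try:
        with conn.cursor() as cur, open(path) as fh:
            cur.execute(fh.read())
    except Exception:
        # Actionable remediation before re-raising, per suggestion 1.
        logger.exception(
            "[MIGRATE] baseline %s failed; inspect schema_migrations for a "
            "partial entry and re-run after manual cleanup", path,
        )
        raise
    log_phase("MIGRATE", f"baseline {path} applied")
```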
These findings inform the remaining TODO subtasks for enhanced error handling.

docs/migrations/consolidated_baseline_plan.md (new file, 53 lines)
@@ -0,0 +1,53 @@

# Consolidated Migration Baseline Plan

This note outlines the content and structure of the planned baseline migration (`scripts/migrations/000_base.sql`). The objective is to capture the currently required schema changes in a single idempotent script so that fresh environments only need to apply one SQL file before proceeding with incremental migrations.

## Guiding Principles

1. **Idempotent DDL**: Every `CREATE` or `ALTER` statement must tolerate repeated execution. Use `IF NOT EXISTS` guards or existence checks (`information_schema`) where necessary (see the sketch after this list).
2. **Order of Operations**: Create reference tables first, then update dependent tables, and finally enforce foreign keys and constraints.
3. **Data Safety**: Default data seeded by migrations should be minimal and ASCII-only to avoid encoding issues in various shells and CI logs.
4. **Compatibility**: The baseline must reflect the schema shape expected by the current SQLAlchemy models, API routes, and seeding scripts.
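What the idempotent-DDL principle looks like in practice, sketched as a psycopg2 call; the statements mirror the plan's conventions rather than the final `000_base.sql` contents, and `capex` is an assumed table name:

```python
IDEMPOTENT_DDL = """
CREATE TABLE IF NOT EXISTS currency (
    id SERIAL PRIMARY KEY,
    code VARCHAR(3) UNIQUE NOT NULL,
    name VARCHAR(128) NOT NULL,
    symbol VARCHAR(8),
    is_active BOOLEAN NOT NULL DEFAULT TRUE
);

-- ADD COLUMN IF NOT EXISTS tolerates re-runs, just like CREATE TABLE IF NOT EXISTS.
ALTER TABLE capex ADD COLUMN IF NOT EXISTS currency_id INTEGER;
"""

def apply_idempotent_ddl(cur) -> None:
    cur.execute(IDEMPOTENT_DDL)  # safe to execute repeatedly
```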
## Schema Elements to Include

### 1. `currency` Table

- Columns: `id SERIAL PRIMARY KEY`, `code VARCHAR(3) UNIQUE NOT NULL`, `name VARCHAR(128) NOT NULL`, `symbol VARCHAR(8)`, `is_active BOOLEAN NOT NULL DEFAULT TRUE`.
- Index: implicit via the unique constraint on `code`.
- Seed rows matching `scripts.seed_data.CURRENCY_SEEDS` (ASCII-only symbols such as `USD$`, `CAD$`).
- Upsert logic using `ON CONFLICT (code) DO UPDATE` to keep names/symbols in sync when rerun.
### 2. Currency Integration for CAPEX/OPEX

- Add `currency_id INTEGER` columns with `IF NOT EXISTS` guards.
- Populate `currency_id` from the legacy `currency_code` column if it exists (see the sketch after this list).
- Default null `currency_id` values to the USD row, then `ALTER` the column to `SET NOT NULL`.
- Create `fk_capex_currency` and `fk_opex_currency` constraints with `ON DELETE RESTRICT` semantics.
- Drop the legacy `currency_code` column if it exists (safe because the new column holds the data).
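A sketch of the conditional backfill for one of the two tables, again assuming `capex` as the table name; the `DO $$` block follows the plan's rule of using procedural blocks only where conditional logic is required:

```python
BACKFILL_CAPEX = """
DO $$
BEGIN
    -- Only backfill from currency_code when the legacy column is still present.
    IF EXISTS (
        SELECT 1 FROM information_schema.columns
        WHERE table_name = 'capex' AND column_name = 'currency_code'
    ) THEN
        UPDATE capex c
        SET currency_id = cur.id
        FROM currency cur
        WHERE c.currency_id IS NULL AND cur.code = c.currency_code;
    END IF;

    -- Default any remaining NULLs to USD, then enforce NOT NULL.
    UPDATE capex
    SET currency_id = (SELECT id FROM currency WHERE code = 'USD')
    WHERE currency_id IS NULL;

    ALTER TABLE capex ALTER COLUMN currency_id SET NOT NULL;
END
$$;
"""

def backfill_capex_currency(cur) -> None:
    cur.execute(BACKFILL_CAPEX)
```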
### 3. Measurement Metadata on Consumption/Production

- Ensure the `consumption` and `production_output` tables have `unit_name VARCHAR(64)` and `unit_symbol VARCHAR(16)` columns with `IF NOT EXISTS` guards.

### 4. `measurement_unit` Reference Table

- Columns: `id SERIAL PRIMARY KEY`, `code VARCHAR(64) UNIQUE NOT NULL`, `name VARCHAR(128) NOT NULL`, `symbol VARCHAR(16)`, `unit_type VARCHAR(32) NOT NULL`, `is_active BOOLEAN NOT NULL DEFAULT TRUE`, `created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()`, `updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()`.
- A trigger to maintain `updated_at` is deferred: omit it from the baseline for now and automate the column via the application layer later.
- Seed rows matching `MEASUREMENT_UNIT_SEEDS` (ASCII names/symbols). Use `ON CONFLICT (code) DO UPDATE` to keep the descriptive fields aligned.

### 5. Transaction Handling

- Wrap the main operations in a single `BEGIN; ... COMMIT;` block.
- Use procedural blocks (`DO $$ ... $$;`) only where conditional logic is required (e.g., checking column existence before a backfill).
## Migration Tracking Alignment

- The baseline file will be named `000_base.sql`. After execution, insert a row into `schema_migrations` with the filename `000_base.sql` to keep the tracking table aligned.
- Existing migrations (`20251021_add_currency_and_unit_fields.sql`, `20251022_create_currency_table_and_fks.sql`) remain for historical reference but will no longer be applied to new environments once the baseline is present.

## Next Steps

1. Draft `000_base.sql` reflecting the steps above.
2. Update `run_migrations` to recognise the baseline file and mark older migrations as applied when the baseline exists (see the sketch after this list).
3. Provide documentation in `docs/quickstart.md` explaining how to reset an environment using the baseline plus seeds.
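One possible shape for the baseline-aware selection in step 2; helper names and the legacy-file set are illustrative, not the script's final API:

```python
LEGACY_MIGRATIONS = {
    "20251021_add_currency_and_unit_fields.sql",
    "20251022_create_currency_table_and_fks.sql",
}

def pending_migrations(applied: set[str], available: list[str]) -> list[str]:
    """Return migration filenames that still need to run, in order."""
    if "000_base.sql" not in applied:
        # Fresh environment: run the baseline first and treat the legacy
        # scripts as applied instead of executing them.
        skip = applied | LEGACY_MIGRATIONS | {"000_base.sql"}
        return ["000_base.sql"] + sorted(f for f in available if f not in skip)
    # Baseline already recorded: only genuinely new migrations remain.
    return sorted(f for f in available if f not in applied)
```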
@@ -68,13 +68,11 @@ pytest

E2E tests use Playwright and a session-scoped `live_server` fixture that starts the app at `http://localhost:8001` for browser-driven tests.

## Migrations & Baseline

A consolidated baseline migration (`scripts/migrations/000_base.sql`) captures all schema changes required for a fresh installation. The script is idempotent: it creates the `currency` and `measurement_unit` reference tables, ensures consumption and production records expose unit metadata, and enforces the foreign keys used by CAPEX and OPEX.

### Run migrations and backfill (development)

Configure granular database settings in your PowerShell session before running migrations:

```powershell
$env:DATABASE_DRIVER = 'postgresql'
```

@@ -84,14 +82,91 @@ $env:DATABASE_USER = 'calminer'

```powershell
$env:DATABASE_PASSWORD = 's3cret'
$env:DATABASE_NAME = 'calminer'
$env:DATABASE_SCHEMA = 'public'
python scripts/setup_database.py --run-migrations --seed-data --dry-run
python scripts/setup_database.py --run-migrations --seed-data
```
The dry-run invocation reports which steps would execute without making changes. The live run applies the baseline (if not already recorded in `schema_migrations`) and seeds the reference data relied upon by the UI and API.

> ℹ️ The application still accepts `DATABASE_URL` as a fallback if the granular variables are not set.

## Database bootstrap workflow
Provision or refresh a database instance with `scripts/setup_database.py`. Populate the required environment variables (an example lives at `config/setup_test.env.example`) and run:

```powershell
# Load test credentials (PowerShell)
Get-Content .\config\setup_test.env.example |
    ForEach-Object {
        if ($_ -and -not $_.StartsWith('#')) {
            $name, $value = $_ -split '=', 2
            Set-Item -Path Env:$name -Value $value
        }
    }

# Dry-run to inspect the planned actions
python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data --dry-run -v

# Execute the full workflow
python scripts/setup_database.py --ensure-database --ensure-role --ensure-schema --initialize-schema --run-migrations --seed-data -v
```
Typical log output confirms:

- Admin and application connections succeed for the supplied credentials.
- Database and role creation are idempotent (`already present` when rerun).
- SQLAlchemy metadata either reports missing tables or `All tables already exist`.
- Migrations list pending files and finish with `Applied N migrations` (a new database reports `Applied 1 migrations` for `000_base.sql`).

After a successful run the target database contains all application tables plus `schema_migrations`, and that table records each applied migration file. New installations record only `000_base.sql`; upgraded environments retain historical entries alongside the baseline.
### Seeding reference data

`scripts/seed_data.py` provides targeted control over the baseline datasets when the full setup script is not required:

```powershell
python scripts/seed_data.py --currencies --units --dry-run
python scripts/seed_data.py --currencies --units
```

The seeder upserts the canonical currency catalog (`USD`, `EUR`, `CLP`, `RMB`, `GBP`, `CAD`, `AUD`) using ASCII-safe symbols (`USD$`, `EUR`, etc.) and the measurement units referenced by the UI (`tonnes`, `kilograms`, `pounds`, `liters`, `cubic_meters`, `kilowatt_hours`). The setup script invokes the same seeder when `--seed-data` is provided and verifies the expected rows afterward, warning if any are missing or inactive.
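The post-seed verification can be sketched as a single query against the expected catalog; the raise-on-failure behaviour follows the idempotency audit's recommendation, and the function name is illustrative:

```python
EXPECTED_CURRENCIES = {"USD", "EUR", "CLP", "RMB", "GBP", "CAD", "AUD"}

def verify_seeded_currencies(cur) -> None:
    """Fail fast if any expected currency is missing or inactive."""
    cur.execute("SELECT code FROM currency WHERE is_active")
    active = {row[0] for row in cur.fetchall()}
    missing = EXPECTED_CURRENCIES - active
    if missing:
        raise RuntimeError(
            f"Seed verification failed; missing or inactive currencies: {sorted(missing)}"
        )
```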
### Rollback guidance

`scripts/setup_database.py` now tracks compensating actions when it creates the database or application role. If a later step fails, the script replays those rollback actions (dropping the newly created database or role and revoking grants) before exiting. Dry runs never register rollback steps and remain read-only.

If the script reports that some rollback steps could not complete (for example, because a connection cannot be established), rerun the script with `--dry-run` to confirm the desired end state and then apply the outstanding cleanup manually:

```powershell
python scripts/setup_database.py --ensure-database --ensure-role --dry-run -v

# Manual cleanup examples when automation cannot connect
psql -d postgres -c "DROP DATABASE IF EXISTS calminer"
psql -d postgres -c "DROP ROLE IF EXISTS calminer"
```

After a failure and rollback, rerun the full setup once the environment issues are resolved.
### CI pipeline environment

The `.gitea/workflows/test.yml` job spins up a temporary PostgreSQL 16 container and runs the setup script twice: once with `--dry-run` to validate the plan and again without it to apply migrations and seeds. No external secrets are required; the workflow sets the following environment variables for both invocations and for pytest:

| Variable | Value | Purpose |
| --- | --- | --- |
| `DATABASE_DRIVER` | `postgresql` | Signals the driver to the setup script |
| `DATABASE_HOST` | `127.0.0.1` | Points to the linked job service |
| `DATABASE_PORT` | `5432` | Default service port |
| `DATABASE_NAME` | `calminer_ci` | Target database created by the workflow |
| `DATABASE_USER` | `calminer` | Application role used during tests |
| `DATABASE_PASSWORD` | `secret` | Password for both admin and app role |
| `DATABASE_SCHEMA` | `public` | Default schema for the tests |
| `DATABASE_SUPERUSER` | `calminer` | Setup script uses the same role for admin actions |
| `DATABASE_SUPERUSER_PASSWORD` | `secret` | Matches the Postgres service password |
| `DATABASE_SUPERUSER_DB` | `calminer_ci` | Database to connect to for admin operations |

The workflow also updates `DATABASE_URL` for pytest to point at the CI Postgres instance. Existing tests continue to work unchanged, since SQLAlchemy reads the URL exactly as it does locally.

Because the workflow provisions everything inline, no repository or organization secrets need to be configured for basic CI runs. If you later move the setup step to staging or production pipelines, replace these inline values with secrets managed by the CI platform.
## Database Objects

docs/seed_data_plan.md (new file, 78 lines)
@@ -0,0 +1,78 @@

# Baseline Seed Data Plan

This document captures the datasets that should be present in a fresh CalMiner installation and the structure required to manage them through `scripts/seed_data.py`.

## Currency Catalog

The `currency` table already exists and is seeded today via `scripts/seed_data.py`. The goal is to keep the canonical list in one place and ensure the default currency (USD) is always active.

| Code | Name | Symbol | Notes |
| ---- | ----------------- | ------ | ------------------------------------------ |
| USD | US Dollar | $ | Default currency (`DEFAULT_CURRENCY_CODE`) |
| EUR | Euro | € | |
| CLP | Chilean Peso | $ | |
| RMB | Chinese Yuan | ¥ | |
| GBP | British Pound | £ | |
| CAD | Canadian Dollar | $ | |
| AUD | Australian Dollar | $ | |
Seeding behaviour:

- Upsert by ISO code; keep the existing name/symbol when they have been updated manually.
- Ensure `is_active` remains true for USD and defaults to true for new rows.
- Defer to runtime validation in `routes.currencies` for enforcing default behaviour.
## Measurement Units

UI routes (`routes/ui.py`) currently rely on the in-memory `MEASUREMENT_UNITS` list to populate dropdowns for consumption and production forms. To make this configurable and available to the API, introduce a dedicated `measurement_unit` table and seed it.

Proposed schema:

| Column | Type | Notes |
| ---------- | --------------- | --------------------------------------- |
| id | SERIAL / BIGINT | Primary key. |
| code | TEXT | Stable slug (e.g. `tonnes`). Unique. |
| name | TEXT | Display label. |
| symbol | TEXT | Short symbol (nullable). |
| unit_type | TEXT | Category (`mass`, `volume`, `energy`). |
| is_active | BOOLEAN | Default `true` for soft disabling. |
| created_at | TIMESTAMP | Optional `NOW()` default. |
| updated_at | TIMESTAMP | Optional `NOW()` trigger/default. |

Initial seed set (mirrors the existing UI list plus type categorisation):

| Code | Name | Symbol | Unit Type |
| -------------- | -------------- | ------ | --------- |
| tonnes | Tonnes | t | mass |
| kilograms | Kilograms | kg | mass |
| pounds | Pounds | lb | mass |
| liters | Liters | L | volume |
| cubic_meters | Cubic Meters | m3 | volume |
| kilowatt_hours | Kilowatt Hours | kWh | energy |

Seeding behaviour:

- Upsert rows by `code`.
- Preserve `unit_type` and `symbol` unless explicitly changed via administration tooling.
- Continue surfacing unit options to the UI by querying this table instead of the static constant (see the sketch after this list).
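Once the table exists, the UI route can swap the constant for a query; a minimal sketch assuming a SQLAlchemy session and a hypothetical `MeasurementUnit` model matching the schema above:

```python
from sqlalchemy import select
from sqlalchemy.orm import Session

from models import MeasurementUnit  # hypothetical model for measurement_unit

def unit_options(session: Session) -> list[dict]:
    """Return active units in display order for the consumption/production forms."""
    units = session.execute(
        select(MeasurementUnit)
        .where(MeasurementUnit.is_active)
        .order_by(MeasurementUnit.name)
    ).scalars()
    return [{"code": u.code, "label": u.name, "symbol": u.symbol} for u in units]
```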
## Default Settings

The application expects certain defaults to exist:

- **Default currency**: enforced by `routes.currencies._ensure_default_currency`; ensure seeds keep USD active.
- **Fallback measurement unit**: the UI currently auto-selects the first option in the list. Once units move to the database, expose an application setting to choose a fallback (future work tracked under "Application Settings management").

## Seeding Structure Updates

To support the datasets above:

1. Extend `scripts/seed_data.py` with a `SeedDataset` registry so each dataset (currencies, units, future defaults) can declare its loader/upsert function and optional dependencies (see the sketch after this list).
2. Add a `--dataset` CLI selector for targeted seeding while keeping `--all` as the default for `setup_database.py` integrations.
3. Update `scripts/setup_database.py` to:
   - Run the migration ensuring the `measurement_unit` table exists.
   - Execute the unit seeder after currencies when `--seed-data` is supplied.
   - Verify post-seed counts, logging which datasets were inserted or updated.
4. Adjust UI routes to load measurement units from the database and remove the hard-coded list once the table is available.
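One way the registry in step 1 could look, as a hedged sketch; the dataclass fields and seeder stubs are illustrative, not the script's final API:

```python
from dataclasses import dataclass
from typing import Callable

def seed_currencies(cur) -> None: ...  # ON CONFLICT upsert, as planned above
def seed_units(cur) -> None: ...       # analogous upsert for measurement units

@dataclass(frozen=True)
class SeedDataset:
    name: str
    seeder: Callable             # upsert function for this dataset
    depends_on: tuple = ()       # datasets that must be seeded first

REGISTRY = {
    "currencies": SeedDataset("currencies", seed_currencies),
    "units": SeedDataset("units", seed_units, depends_on=("currencies",)),
}

def run(datasets: list[str], cur) -> None:
    """Seed the requested datasets, resolving declared dependencies first."""
    done: set[str] = set()

    def _run(name: str) -> None:
        if name in done:
            return
        ds = REGISTRY[name]
        for dep in ds.depends_on:
            _run(dep)
        ds.seeder(cur)
        done.add(name)

    for name in datasets:
        _run(name)
```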
This plan aligns with the TODO item for seeding initial data and lays the groundwork for consolidating migrations around a single baseline file that introduces both the schema and seed data in an idempotent manner.