Initialize project documentation

Add architecture documentation and functional requirements for CalMiner project

- Created Building Block View (05_building_block_view.md) detailing system architecture and component interactions.
- Developed Runtime View (06_runtime_view.md) outlining key user scenarios and interactions within the system.
- Established Deployment View (07_deployment_view.md) describing the infrastructure and mapping of building blocks to deployment components.
- Added README.md for architecture documentation structure.
- Introduced functional requirements (FR-001 to FR-010) covering scenario management, data import/export, reporting, user management, and collaboration features.
- Included templates for documenting requirements to ensure consistency across the project.
2025-11-08 19:49:07 +01:00
commit ad56c3c610
23 changed files with 2203 additions and 0 deletions


@@ -0,0 +1,57 @@
# Introduction and Goals
CalMiner aims to provide a comprehensive platform for mining project scenario analysis, enabling stakeholders to make informed decisions based on data-driven insights.
## Business Goals
- **Optimize Project Planning**: Provide tools that help mining companies plan projects more effectively by analyzing various scenarios and their potential outcomes.
- **Enhance Financial Analysis**: Enable detailed financial assessments of mining projects to support investment decisions.
- **Improve Decision-Making**: Offer data-driven insights that empower stakeholders to make informed choices regarding mining operations.
## Driving Forces
| Driving Force | Rationale |
| -------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- |
| Market Demand | There is a growing need for advanced analytics in the mining industry to optimize operations and reduce costs. |
| Regulatory Compliance | Mining companies must adhere to strict regulations regarding environmental impact and resource management, necessitating robust analysis tools. |
| Technological Advancements | Rapid advancements in data analytics and machine learning present opportunities for more sophisticated scenario analysis. |
| Stakeholder Collaboration | Increased collaboration among stakeholders requires a platform that facilitates shared access to data and insights. |
| Sustainability Goals | Increased focus on sustainable practices, driving the need for tools that can assess environmental and social impacts alongside financial metrics. |
| Cost Reduction | Mining companies are under constant pressure to reduce operational costs, making efficient and effective analysis tools essential. |
| Dynamic Market Conditions | The volatility of commodity prices and market conditions necessitates flexible and adaptive scenario planning capabilities. |
| Data Integration | The ability to integrate diverse data sources, including geological, financial, and operational data, is crucial for comprehensive analysis. |
## Key Features
- Advanced project planning tools
- Financial analysis and reporting
- Data integration capabilities
- Advanced analytics and machine learning
- User-friendly interface
- Custom reporting options
- Collaboration tools
## Functional Requirements
A detailed list of functional requirements can be found in the [Requirements Document](../requirements/requirements.md). Key functionalities include project planning tools, financial analysis modules, and data integration capabilities.
## Quality Goals
| Quality Goal | Scenario | Priority |
| ------------------------------- | ------------------------------------------------------------------------------------------------------- | -------- |
| Comprehensive Scenario Analysis | Users can create and analyze multiple project scenarios to assess risks and opportunities. | High |
| Data-Driven Decision Making | Stakeholders have access to real-time data and analytics to inform their decisions. | High |
| User-Friendly Interface | The platform is designed with an intuitive interface that requires minimal training for new users. | Medium |
| Security | Sensitive data is protected through robust security measures, including encryption and access controls. | Medium |
| Scalability | The system can handle increasing amounts of data and users without performance degradation. | Low |
| Maintainability | The architecture allows for easy updates and maintenance with minimal downtime. | Low |
## Stakeholders
| Stakeholder | Role/Interest |
| -------------------- | -------------------------------------------------------------------------------------------- |
| Mining Companies | Primary users interested in optimizing project planning and financial analysis. |
| Project Managers | Responsible for overseeing mining projects and ensuring successful execution. |
| Financial Analysts | Focused on evaluating the financial viability of mining projects using the platform's tools. |
| Executive Leadership | Interested in high-level insights and strategic decision-making based on scenario analyses. |
| Investors | Concerned with the financial performance and risk assessment of mining projects. |


@@ -0,0 +1,139 @@
# Architecture Constraints
## Table of Contents
- [Architecture Constraints](#architecture-constraints)
- [Table of Contents](#table-of-contents)
- [Technical Constraints](#technical-constraints)
- [Framework Selection](#framework-selection)
- [Database Technology](#database-technology)
- [Frontend Technologies](#frontend-technologies)
- [Simulation Logic](#simulation-logic)
- [Organizational and Political Constraints](#organizational-and-political-constraints)
- [Team Expertise](#team-expertise)
- [Development Processes](#development-processes)
- [Collaboration Tools](#collaboration-tools)
- [Documentation Standards](#documentation-standards)
- [Knowledge Sharing](#knowledge-sharing)
- [Resource Availability](#resource-availability)
- [Regulatory Constraints](#regulatory-constraints)
- [Data Privacy Compliance](#data-privacy-compliance)
- [Industry Standards](#industry-standards)
- [Auditability](#auditability)
- [Data Retention Policies](#data-retention-policies)
- [Security Standards](#security-standards)
- [Environmental Constraints](#environmental-constraints)
- [Deployment Environments](#deployment-environments)
- [Cloud Provider Limitations](#cloud-provider-limitations)
- [Containerization](#containerization)
- [Scalability Requirements](#scalability-requirements)
- [Performance Constraints](#performance-constraints)
- [Response Time](#response-time)
- [Scalability Needs](#scalability-needs)
- [Conventions](#conventions)
- [Programming Language](#programming-language)
- [Versioning](#versioning)
## Technical Constraints
### Framework Selection
The choice of FastAPI as the web framework imposes constraints on how the application handles requests, routing, and middleware. FastAPI's asynchronous capabilities must be leveraged appropriately to ensure optimal performance.
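In practice, this constraint means request paths must await I/O rather than block the event loop. A stdlib-only sketch (no FastAPI dependency; `fetch_scenario` is a hypothetical stand-in for an awaitable database call) shows the payoff of async handlers: two concurrent I/O-bound requests complete in roughly the time of one.

```python
import asyncio
import time

async def fetch_scenario(scenario_id: int) -> dict:
    # Stand-in for an awaitable database call; handlers written as
    # `async def` can await such I/O without blocking other requests
    # on the same event loop.
    await asyncio.sleep(0.05)
    return {"id": scenario_id, "status": "loaded"}

async def handle_many() -> float:
    start = time.perf_counter()
    results = await asyncio.gather(fetch_scenario(1), fetch_scenario(2))
    assert [r["id"] for r in results] == [1, 2]
    return time.perf_counter() - start

if __name__ == "__main__":
    elapsed = asyncio.run(handle_many())
    # Both calls overlap, so total time is close to one sleep, not two.
    print(f"elapsed: {elapsed:.3f}s")
```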
### Database Technology
The use of PostgreSQL as the primary database system dictates the data modeling, querying capabilities, and transaction management strategies. SQLAlchemy ORM is used for database interactions, which requires adherence to its conventions and limitations.
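As a sketch of what working within SQLAlchemy's conventions looks like, the following maps a hypothetical `Scenario` model (not the real CalMiner schema) against an in-memory SQLite database; production would target PostgreSQL instead.

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Scenario(Base):
    """Hypothetical model for illustration only."""
    __tablename__ = "scenarios"
    id = Column(Integer, primary_key=True)
    name = Column(String(100), nullable=False)

# SQLite in memory keeps the sketch self-contained; the real system
# would point the engine at PostgreSQL.
engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Scenario(name="Base case"))
    session.commit()
    found = session.query(Scenario).filter_by(name="Base case").one()
    print(found.id, found.name)
```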
### Frontend Technologies
The decision to use Jinja2 for server-side templating and Chart.js for data visualization influences the structure of the frontend code and the way dynamic content is rendered.
### Simulation Logic
The Monte Carlo simulation logic must be designed to efficiently handle large datasets and perform computations within the constraints of the chosen programming language (Python) and its libraries.
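A minimal, stdlib-only sketch of the kind of computation the simulation performs follows; the distributions and the profit metric are illustrative, not CalMiner's actual model, and NumPy vectorization would be the usual choice for large trial counts.

```python
import random
import statistics

def simulate_profit(n_trials: int, seed: int = 42) -> list[float]:
    """Monte Carlo sketch: profit = revenue - cost, per trial.

    Distributions are illustrative assumptions, not the real model.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    profits = []
    for _ in range(n_trials):
        revenue = rng.gauss(120.0, 15.0)                       # e.g. $M
        cost = rng.triangular(low=60.0, high=110.0, mode=80.0)  # e.g. $M
        profits.append(revenue - cost)
    return profits

profits = simulate_profit(10_000)
print(f"mean profit: {statistics.mean(profits):.1f}")
print(f"P(loss): {sum(p < 0 for p in profits) / len(profits):.2%}")
```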
## Organizational and Political Constraints
### Team Expertise
The development team's familiarity with FastAPI, SQLAlchemy, and frontend technologies like Jinja2 and Chart.js influences the architecture choices to ensure maintainability and ease of development.
### Development Processes
The adoption of Agile methodologies and CI/CD pipelines (using Gitea Actions) shapes the architecture to support continuous integration, automated testing, and deployment practices.
### Collaboration Tools
The use of specific collaboration and version control tools (e.g., Gitea) affects how code is managed, reviewed, and integrated, impacting the overall architecture and development workflow.
### Documentation Standards
The requirement for comprehensive documentation (as seen in the `docs/` folder) necessitates an architecture that is well-structured and easy to understand for both current and future team members.
### Knowledge Sharing
The need for effective knowledge sharing and onboarding processes influences the architecture to ensure that it is accessible and understandable for new team members.
### Resource Availability
The availability of hardware, software, and human resources within the organization can impose constraints on the architecture, affecting decisions related to scalability, performance, and feature implementation.
## Regulatory Constraints
### Data Privacy Compliance
The architecture must ensure compliance with data privacy regulations such as GDPR or CCPA, which may dictate how user data is collected, stored, and processed.
### Industry Standards
Adherence to industry-specific standards and best practices may influence the design of data models, security measures, and reporting functionalities.
### Auditability
The system may need to incorporate logging and auditing features to meet regulatory requirements, affecting the architecture of data storage and access controls.
### Data Retention Policies
Regulatory requirements regarding data retention and deletion may impose constraints on how long certain types of data can be stored, influencing database design and data lifecycle management.
### Security Standards
Compliance with security standards (e.g., ISO/IEC 27001) may necessitate the implementation of specific security measures, such as encryption, access controls, and vulnerability management, which impact the overall architecture.
## Environmental Constraints
### Deployment Environments
The architecture must accommodate various deployment environments (development, testing, production) with differing configurations and resource allocations.
### Cloud Provider Limitations
If deployed on a specific cloud provider, the architecture may need to align with the provider's services, limitations, and best practices, such as using managed databases or specific container orchestration tools.
### Containerization
The use of Docker for containerization imposes constraints on how the application is packaged, deployed, and scaled, influencing the architecture to ensure compatibility with container orchestration platforms.
### Scalability Requirements
The architecture must be designed to scale efficiently based on anticipated load and usage patterns, considering the limitations of the chosen infrastructure.
## Performance Constraints
### Response Time
The system must ensure that user interactions, such as data retrieval and report generation, occur within acceptable time frames to maintain a positive user experience.
### Scalability Needs
The architecture should support scaling to accommodate varying workloads, ensuring consistent performance during peak usage periods without significant degradation.
## Conventions
### Programming Language
The system is developed using Python, and all code must adhere to PEP 8 style guidelines to ensure consistency and readability across the codebase.
### Versioning
Semantic Versioning (SemVer) is used for all releases to clearly communicate changes and compatibility.


@@ -0,0 +1,130 @@
# Context and Scope
## Table of Contents
- [Context and Scope](#context-and-scope)
- [Table of Contents](#table-of-contents)
- [Business Context](#business-context)
- [Users](#users)
- [Systems](#systems)
- [Technical Context](#technical-context)
- [Communication Channels and Protocols](#communication-channels-and-protocols)
## Business Context
The business context for the system includes various stakeholders such as end-users, business analysts, and regulatory bodies. Each of these stakeholders has specific needs and expectations regarding the system's functionality and data handling.
### Users
```mermaid
graph TD
User[Users]
System[System]
System -->|Outputs| User
User -->|Inputs| System
```
- Executive Management: Reviews high-level reports generated by the system for strategic planning.
- Project Managers: Use the system to plan projects and monitor progress.
- Business Analysts: Utilize system outputs for decision-making and strategy formulation.
- Financial Analysts: Analyze financial data produced by the system to guide investment decisions.
- Administrators: Manage system configurations and user access.
- DevOps: Oversee deployment and integration processes, ensuring system reliability and performance.
| User | Inputs | Outputs |
| -------------------- | ----------------------- | -------------------------- |
| Executive Management | Strategic data requests | High-level reports |
| Project Managers | Project plans | Progress updates |
| Business Analysts | Data queries | Analytical insights |
| Financial Analysts | Financial data | Investment reports |
| Administrators | Configuration changes | System status |
| DevOps | Deployment scripts | System performance metrics |
```mermaid
graph LR
subgraph CalMiner
System[System]
end
EM[Executive Management]
PM[Project Managers]
BA[Business Analysts]
FA[Financial Analysts]
EM -->|Strategic data requests| System
System -->|High-level reports| EM
PM -->|Project plans| System
System -->|Progress updates| PM
BA -->|Data queries| System
System -->|Analytical insights| BA
FA -->|Financial data| System
System -->|Investment reports| FA
```
### Systems
- Database Systems: Store and retrieve data used and generated by the system.
- Data Warehouse: Provides historical data for analysis and reporting.
- External APIs: Supplies real-time data inputs for scenario analysis.
- Authentication Service: Manages user authentication and authorization.
- Reporting Tools: Integrates with the system to generate customized reports.
- Monitoring Systems: Tracks system performance and health metrics.
- CI/CD Pipeline: Facilitates automated deployment and integration processes.
- Logging Service: Collects and stores system logs for auditing and troubleshooting purposes.
- Backup System: Ensures data integrity and recovery in case of failures.
- Notification Service: Sends alerts and notifications to users based on system events.
| Systems | Inputs | Outputs |
| ---------------------- | -------------------------- | --------------------- |
| Database Systems | Data read/write requests | Stored/retrieved data |
| Data Warehouse | Historical data queries | Historical datasets |
| External APIs | Real-time data requests | Real-time data feeds |
| Authentication Service | Login requests | Authentication tokens |
| Reporting Tools | Report generation requests | Customized reports |
| Monitoring Systems | Performance data | Health metrics |
| CI/CD Pipeline | Code commits | Deployed applications |
| Logging Service | Log entries | Stored logs |
| Backup System | Backup requests | Restored data |
| Notification Service | Alert triggers | User notifications |
```mermaid
graph LR
subgraph CalMiner
System[System]
end
DB[Database Systems]
DW[Data Warehouse]
API[External APIs]
Auth[Authentication Service]
DB -->|Data read/write requests| System
System -->|Stored/retrieved data| DB
DW -->|Historical data queries| System
System -->|Historical datasets| DW
API -->|Real-time data requests| System
System -->|Real-time data feeds| API
Auth -->|Login requests| System
System -->|Authentication tokens| Auth
```
## Technical Context
### Communication Channels and Protocols
| Communication Partner | Channel/Protocol | Description |
| ---------------------- | ------------------ | ------------------------------------------------- |
| Database Systems | TCP/IP, SQL | Standard database communication protocols |
| Data Warehouse | ODBC/JDBC | Database connectivity protocols |
| External APIs | RESTful HTTP, JSON | Web service communication protocols |
| Authentication Service | OAuth 2.0, HTTPS | Secure authentication and authorization protocols |
| Reporting Tools | HTTP, PDF/Excel | Report generation and delivery protocols |
| Monitoring Systems | SNMP, HTTP | System monitoring and alerting protocols |
| CI/CD Pipeline | Git, SSH | Code versioning and deployment protocols |
| Notification Service | SMTP, Webhooks | Alert and notification delivery protocols |


@@ -0,0 +1,83 @@
# Solution Strategy
## Table of Contents
- [Solution Strategy](#solution-strategy)
- [Table of Contents](#table-of-contents)
- [Technology Decisions](#technology-decisions)
- [Programming Language](#programming-language)
- [Web Framework](#web-framework)
- [Database](#database)
- [Frontend Technologies](#frontend-technologies)
- [Architectural Patterns](#architectural-patterns)
- [Layered Architecture](#layered-architecture)
- [Client-Server Pattern](#client-server-pattern)
- [Containerization](#containerization)
- [Quality Goals Achievement](#quality-goals-achievement)
- [Comprehensive Scenario Analysis](#comprehensive-scenario-analysis)
- [Data-Driven Decision Making](#data-driven-decision-making)
- [User-Friendly Interface](#user-friendly-interface)
- [Security](#security)
- [Scalability](#scalability)
- [Organizational Decisions](#organizational-decisions)
- [Development Process](#development-process)
## Technology Decisions
### Programming Language
Python is the primary programming language, chosen for its simplicity, readability, and the extensive libraries that support rapid development and data analysis.
### Web Framework
FastAPI serves as the web framework due to its high performance, ease of use, and support for asynchronous programming, which is essential for handling many concurrent requests efficiently.
### Database
PostgreSQL is the database system, selected for its robustness, scalability, and strong support for complex queries, all of which are necessary for managing the application's data effectively.
### Frontend Technologies
Jinja2 handles server-side templating to generate HTML pages dynamically, and Chart.js provides interactive charts and graphs for data visualization.
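A minimal sketch of how the two fit together: the server renders scenario values into the page with Jinja2, and Chart.js consumes them client-side. The template fragment and variable names are hypothetical.

```python
from jinja2 import Template

# Hypothetical page fragment: the server injects the data series,
# which client-side Chart.js code would pass to `new Chart(...)`.
template = Template(
    "<canvas id='chart'></canvas>"
    "<script>const data = {{ values }};</script>"
)
html = template.render(values=[12.5, 17.0, 9.8])
print(html)
```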
## Architectural Patterns
### Layered Architecture
The system follows a layered architecture pattern, separating concerns into distinct layers: presentation, business logic, and data access. This separation enhances maintainability, scalability, and testability of the application.
### Client-Server Pattern
The application is designed using the client-server pattern, where the client (frontend) interacts with the server (backend) through RESTful APIs. This separation allows for independent development and scaling of the client and server components.
### Containerization
The application is containerized using Docker or Podman to ensure consistency across deployment environments, facilitate scalability, and simplify the deployment process.
## Quality Goals Achievement
### Comprehensive Scenario Analysis
The system employs efficient data processing algorithms and leverages PostgreSQL's capabilities to handle large datasets, enabling users to create and analyze multiple project scenarios effectively.
### Data-Driven Decision Making
The reporting module and interactive Chart.js visualizations give stakeholders access to real-time data and analytics to inform their decisions.
### User-Friendly Interface
The server-rendered interface, built from consistent Jinja2 templates, is intuitive and requires minimal training for new users.
### Security
Sensitive data is protected through encryption and access controls, in line with the security standards identified in the architecture constraints.
### Scalability
The system is designed to scale horizontally by adding more instances of services as needed. This is facilitated by the use of containerization and orchestration tools like Kubernetes, which manage the deployment and scaling of containerized applications.
## Organizational Decisions
### Development Process
The development team follows an Agile methodology, allowing for iterative development, continuous feedback, and adaptability to changing requirements. This approach enhances collaboration among team members and stakeholders, ensuring that the final product meets user needs effectively.


@@ -0,0 +1,221 @@
# Building Block View
## Table of Contents
- [Building Block View](#building-block-view)
- [Table of Contents](#table-of-contents)
- [Whitebox Overall System](#whitebox-overall-system)
- [Level 1 Diagram](#level-1-diagram)
- [API Layer](#api-layer)
- [Service Layer](#service-layer)
- [Data Access Layer](#data-access-layer)
- [Frontend Layer](#frontend-layer)
- [Database System](#database-system)
- [Level 2](#level-2)
- [Level 2 Diagram](#level-2-diagram)
- [Frontend Components](#frontend-components)
- [User Interface](#user-interface)
- [Visualization Module](#visualization-module)
- [Backend Components](#backend-components)
- [Authentication Service](#authentication-service)
- [Reporting Module](#reporting-module)
- [Simulation Engine](#simulation-engine)
- [Level 3](#level-3)
- [Simulation Engine Components](#simulation-engine-components)
- [Mining Algorithm](#mining-algorithm)
- [Data Preprocessing](#data-preprocessing)
- [Result Postprocessing](#result-postprocessing)
## Whitebox Overall System
This diagram shows the main building blocks of the CalMiner system and their relationships.
### Level 1 Diagram
```mermaid
graph TD
APILayer[API Layer]
ServiceLayer[Service Layer]
DataAccessLayer[Data Access Layer]
FrontendLayer[Frontend Layer]
DatabaseSystem[Database System]
APILayer --> ServiceLayer
ServiceLayer --> DataAccessLayer
FrontendLayer --> APILayer
DataAccessLayer --> DatabaseSystem
```
### API Layer
The API Layer is responsible for handling incoming requests and routing them to the appropriate services. It provides a RESTful interface for external clients to interact with the CalMiner system.
_Responsibility:_ Handle HTTP requests and responses.
_Interface:_ RESTful API endpoints.
_Dependencies:_ Depends on the Service Layer for business logic processing.
### Service Layer
The Service Layer contains the core business logic of the CalMiner application. It processes data, applies mining algorithms, and manages workflows between different components.
_Responsibility:_ Implement business rules and data processing.
_Interface:_ Service interfaces for communication with the API Layer.
_Dependencies:_ Depends on the Data Access Layer for data persistence.
### Data Access Layer
The Data Access Layer is responsible for interacting with the underlying data storage systems. It provides an abstraction over the data sources and handles all CRUD operations.
_Responsibility:_ Manage data storage and retrieval.
_Interface:_ Data access interfaces for the Service Layer.
_Dependencies:_ Depends on the Database System for persistent storage.
### Frontend Layer
The Frontend Layer is responsible for the user interface of the CalMiner application. It provides a web-based interface for users to create projects, configure parameters, and view reports.
_Responsibility:_ Render user interface and handle user interactions.
_Interface:_ Web pages and client-side scripts.
_Dependencies:_ Depends on the API Layer for backend communication.
### Database System
The Database System is responsible for storing and managing all persistent data used by the CalMiner application, including user data, project configurations, and analysis results.
_Responsibility:_ Manage persistent data storage and retrieval.
_Interface:_ Database access interfaces for the Data Access Layer.
_Dependencies:_ None.
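The layering above can be sketched as plain classes whose constructor dependencies mirror the arrows in the Level 1 diagram. Class and method names here are illustrative, not the actual codebase.

```python
class DatabaseSystem:
    """Bottom of the stack: depends on nothing."""
    def __init__(self) -> None:
        self._rows: dict[int, str] = {}

    def put(self, key: int, value: str) -> None:
        self._rows[key] = value

    def get(self, key: int) -> str:
        return self._rows[key]

class DataAccessLayer:
    """Abstracts storage; depends only on the Database System."""
    def __init__(self, db: DatabaseSystem) -> None:
        self.db = db

    def save_scenario(self, sid: int, name: str) -> None:
        self.db.put(sid, name)

    def load_scenario(self, sid: int) -> str:
        return self.db.get(sid)

class ServiceLayer:
    """Business rules; depends only on the Data Access Layer."""
    def __init__(self, dal: DataAccessLayer) -> None:
        self.dal = dal

    def create_scenario(self, sid: int, name: str) -> str:
        self.dal.save_scenario(sid, name.strip())
        return self.dal.load_scenario(sid)

class APILayer:
    """HTTP boundary; depends only on the Service Layer."""
    def __init__(self, service: ServiceLayer) -> None:
        self.service = service

    def post_scenario(self, sid: int, name: str) -> dict:
        return {"id": sid, "name": self.service.create_scenario(sid, name)}

# Wiring follows the diagram: API -> Service -> Data Access -> Database.
api = APILayer(ServiceLayer(DataAccessLayer(DatabaseSystem())))
print(api.post_scenario(1, "  Base case "))
```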
## Level 2
### Level 2 Diagram
```mermaid
graph TB
subgraph F[Frontend Layer]
direction TB
UserInterface[User Interface]
VisualizationModule[Visualization Module]
end
subgraph A[API Layer]
APILayer[API Layer]
end
subgraph S[Service Layer]
direction TB
AuthenticationService[Authentication Service]
ReportingModule[Reporting Module]
SimulationEngine[Simulation Engine]
end
subgraph D[Data Access Layer]
DataAccessLayer[Data Access Layer]
end
subgraph DB[Database System]
DatabaseSystem[Database System]
end
F --> A
A --> S
S --> D
D --> DB
```
### Frontend Components
#### User Interface
The User Interface component is responsible for rendering the web pages and handling user interactions.
_Responsibility:_ Display data and capture user input.
_Interface:_ HTML, CSS, JavaScript.
_Dependencies:_ Depends on the API Layer for data retrieval.
#### Visualization Module
The Visualization Module provides data visualization capabilities, allowing users to view analysis results in graphical formats.
_Responsibility:_ Generate charts and graphs.
_Interface:_ Chart.js library.
_Dependencies:_ Depends on the API Layer for data.
### Backend Components
#### Authentication Service
The Authentication Service manages user authentication and authorization.
_Responsibility:_ Handle user login and access control.
_Interface:_ Authentication APIs.
_Dependencies:_ Depends on the Database System for user data.
#### Reporting Module
The Reporting Module generates comprehensive reports based on analysis results.
_Responsibility:_ Create and format reports.
_Interface:_ Report generation APIs.
_Dependencies:_ Depends on the Service Layer for data processing.
#### Simulation Engine
The Simulation Engine performs the core mining simulations and calculations.
_Responsibility:_ Execute mining algorithms.
_Interface:_ Simulation APIs.
_Dependencies:_ Depends on the Data Access Layer for data retrieval and storage.
## Level 3
### Simulation Engine Components
#### Mining Algorithm
The Mining Algorithm component is responsible for implementing the core mining logic.
_Responsibility:_ Execute mining algorithms on the input data.
_Interface:_ Algorithm APIs.
_Dependencies:_ Depends on the Data Access Layer for data retrieval.
#### Data Preprocessing
The Data Preprocessing component handles the preparation of input data for mining.
_Responsibility:_ Clean and transform input data.
_Interface:_ Data preprocessing APIs.
_Dependencies:_ Depends on the Data Access Layer for data retrieval.
#### Result Postprocessing
The Result Postprocessing component is responsible for formatting and refining the mining results.
_Responsibility:_ Prepare final results for presentation.
_Interface:_ Result formatting APIs.
_Dependencies:_ Depends on the Data Access Layer for data storage.


@@ -0,0 +1,346 @@
# Runtime View
## Table of Contents
- [Runtime View](#runtime-view)
- [Table of Contents](#table-of-contents)
- [Login Process](#login-process)
- [Project Creation](#project-creation)
- [Mining Method Selection](#mining-method-selection)
- [Scenario Creation](#scenario-creation)
- [Default Business Scenario](#default-business-scenario)
- [Parameter Configuration](#parameter-configuration)
- [Basic Accounting Setup](#basic-accounting-setup)
- [Currency Setup](#currency-setup)
- [Report Generation](#report-generation)
- [User Management](#user-management)
<!--
The runtime view describes concrete behavior and interactions of the system's building blocks in form of scenarios from the following areas:
- important use cases or features: how do building blocks execute them?
- interactions at critical external interfaces: how do building blocks cooperate with users and neighboring systems?
- operation and administration: launch, start-up, stop
- error and exception scenarios
Remark: The main criterion for the choice of possible scenarios (sequences, workflows) is their _architectural relevance_. It is _not_ important to describe a large number of scenarios. You should rather document a representative selection.
_Motivation:_ You should understand how (instances of) building blocks of your system perform their job and communicate at runtime. You will mainly capture scenarios in your documentation to communicate your architecture to stakeholders that are less willing or able to read and understand the static models (building block view, deployment view).
_Form:_ There are many notations for describing scenarios, e.g. numbered list of steps (in natural language), activity diagrams or flow charts, sequence diagrams, BPMN or EPCs (event process chains), state machines, ...
## Runtime Scenario 1
- _<insert runtime diagram or textual description of the scenario>_
- _<insert description of the notable aspects of the interactions between the building block instances depicted in this diagram.>_
-->
## Login Process
1. The user navigates to the login page.
2. The user enters their credentials (username and password).
3. The API Layer receives the login request and forwards it to the Authentication Service in the Service Layer.
4. The Authentication Service validates the credentials against the user data stored in the Database System via the Data Access Layer.
5. If the credentials are valid, the Authentication Service generates a session token and sends it back to the API Layer.
6. The API Layer returns the session token to the user, granting access to the system.
```mermaid
sequenceDiagram
autonumber
participant U as User
participant API as API Layer
participant AS as Authentication Service
participant DB as Database System
U->>API: Submit login credentials
API->>AS: Forward credentials
AS->>DB: Validate credentials
DB-->>AS: Return validation result
alt Valid Credentials
AS->>API: Generate session token
API->>U: Return session token
else Invalid Credentials
AS->>API: Return error message
API->>U: Return error message
end
```
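Steps 1-6 condense into the following stdlib sketch. The in-memory user store is hypothetical; a real Authentication Service would use salted password hashing (e.g. bcrypt or argon2) and signed, expiring tokens.

```python
import hashlib
import hmac
import secrets
from typing import Optional

# Hypothetical user store; bare SHA-256 is for illustration only.
USERS = {"alice": hashlib.sha256(b"s3cret").hexdigest()}
SESSIONS: dict[str, str] = {}

def login(username: str, password: str) -> Optional[str]:
    """Return a session token on success, None on failure."""
    stored = USERS.get(username)
    if stored is None:
        return None
    supplied = hashlib.sha256(password.encode()).hexdigest()
    # Constant-time comparison avoids timing side channels.
    if not hmac.compare_digest(stored, supplied):
        return None
    token = secrets.token_urlsafe(32)
    SESSIONS[token] = username
    return token

token = login("alice", "s3cret")
print("token issued:", token is not None)
print("bad password:", login("alice", "wrong"))
```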
## Project Creation
### Mining Method Selection
1. The user selects the option to create a new project.
2. The API Layer presents the user with a list of available mining methods.
3. The user selects a mining method from the list.
4. The API Layer forwards the selection to the Service Layer.
5. The Service Layer retrieves the details of the selected mining method from the Data Access Layer.
6. The Data Access Layer queries the Database System for the mining method information.
7. The Database System returns the mining method details to the Data Access Layer.
8. The Data Access Layer sends the mining method details back to the Service Layer.
9. The Service Layer returns the mining method details to the API Layer.
10. The API Layer displays the mining method details to the user for confirmation.
```mermaid
sequenceDiagram
autonumber
participant U as User
participant API as API Layer
participant S as Service Layer
participant D as Data Access Layer
U->>API: Request new project creation
API->>S: Request available mining methods
S->>D: Retrieve available methods
D-->>S: Return methods
S-->>API: Return methods
API->>U: Display methods
U->>API: Select mining method
API->>S: Forward selection
S->>D: Retrieve method details
D-->>S: Return method details
S-->>API: Return method details
API->>U: Display details for confirmation
```
## Scenario Creation
### Default Business Scenario
1. The user selects the option to create a default business scenario.
2. The API Layer receives the request and forwards it to the Service Layer.
3. The Service Layer initiates the creation of a default scenario in the Data Access Layer.
4. The Data Access Layer creates the default scenario in the Database System.
5. The Database System acknowledges the creation of the scenario to the Data Access Layer.
6. The Data Access Layer confirms the scenario creation to the Service Layer.
7. The Service Layer notifies the API Layer of the successful scenario creation.
8. The API Layer informs the user that the default business scenario has been successfully created.
```mermaid
sequenceDiagram
autonumber
participant U as User
participant API as API Layer
participant S as Service Layer
participant D as Data Access Layer
U->>API: Request default scenario creation
API->>S: Forward request
S->>D: Create default scenario
D-->>S: Acknowledge creation
S-->>API: Confirm scenario creation
API->>U: Notify user of successful creation
```
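A compact sketch of this creation-and-acknowledgement chain, with an in-memory list standing in for the Database System; `DEFAULT_SCENARIO` and all identifiers are illustrative assumptions.

```python
# Sketch of default scenario creation; acknowledgement flows back up the layers.

DEFAULT_SCENARIO = {"name": "Default Business Scenario", "currency": "USD"}


class DataAccessLayer:
    def __init__(self):
        self._scenarios: list[dict] = []  # stand-in for the Database System

    def create_scenario(self, data: dict) -> int:
        self._scenarios.append(dict(data))
        return len(self._scenarios)       # acknowledgement: the new scenario id


class ServiceLayer:
    def __init__(self, dal: DataAccessLayer):
        self._dal = dal

    def create_default_scenario(self) -> dict:
        scenario_id = self._dal.create_scenario(DEFAULT_SCENARIO)
        # confirmation propagated back to the API Layer, then to the user
        return {"id": scenario_id, "status": "created"}


service = ServiceLayer(DataAccessLayer())
print(service.create_default_scenario())  # {'id': 1, 'status': 'created'}
```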
## Parameter Configuration
1. The user navigates to the parameter configuration section.
2. The API Layer retrieves the current configuration parameters from the Service Layer.
3. The Service Layer requests the parameters from the Data Access Layer.
4. The Data Access Layer queries the Database System for the configuration parameters.
5. The Database System returns the configuration parameters to the Data Access Layer.
6. The Data Access Layer sends the parameters back to the Service Layer.
7. The Service Layer forwards the parameters to the API Layer.
8. The API Layer displays the current configuration parameters to the user.
9. The user modifies the desired parameters and submits the changes.
10. The API Layer forwards the updated parameters to the Service Layer.
11. The Service Layer updates the parameters in the Data Access Layer.
12. The Data Access Layer saves the updated parameters in the Database System.
13. The Database System acknowledges the update to the Data Access Layer.
14. The Data Access Layer confirms the update to the Service Layer.
15. The Service Layer notifies the API Layer of the successful parameter update.
16. The API Layer informs the user that the parameters have been successfully updated.
```mermaid
sequenceDiagram
autonumber
participant U as User
participant API as API Layer
participant S as Service Layer
participant D as Data Access Layer
participant DB as Database System
U->>API: Request current parameters
API->>S: Forward request
S->>D: Retrieve parameters
D->>DB: Query parameters
DB-->>D: Return parameters
D-->>S: Send parameters
S-->>API: Forward parameters
API->>U: Display parameters
U->>API: Submit updated parameters
API->>S: Forward updates
S->>D: Update parameters
D->>DB: Save updated parameters
DB-->>D: Acknowledge update
D-->>S: Confirm update
S-->>API: Notify successful update
API->>U: Inform user of success
```
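The read-then-update cycle above can be sketched as follows; the same shape applies to the accounting and currency setup flows. Parameter names and the in-memory store are assumptions for illustration only.

```python
# Illustrative read/update cycle for configuration parameters.

class DataAccessLayer:
    def __init__(self, db: dict):
        self._db = db  # stand-in for the Database System

    def get_parameters(self) -> dict:
        return dict(self._db["parameters"])

    def save_parameters(self, updates: dict) -> bool:
        self._db["parameters"].update(updates)
        return True  # the Database System's acknowledgement


class ServiceLayer:
    def __init__(self, dal: DataAccessLayer):
        self._dal = dal

    def current_parameters(self) -> dict:
        return self._dal.get_parameters()

    def update_parameters(self, updates: dict) -> dict:
        ok = self._dal.save_parameters(updates)
        return {"status": "updated" if ok else "failed",
                "parameters": self._dal.get_parameters()}


db = {"parameters": {"discount_rate": 0.08, "mine_life_years": 12}}
service = ServiceLayer(DataAccessLayer(db))
print(service.current_parameters()["discount_rate"])                # 0.08
print(service.update_parameters({"discount_rate": 0.1})["status"])  # updated
```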
## Basic Accounting Setup
1. The user accesses the basic accounting setup section.
2. The API Layer retrieves the current accounting settings from the Service Layer.
3. The Service Layer requests the settings from the Data Access Layer.
4. The Data Access Layer queries the Database System for the accounting settings.
5. The Database System returns the accounting settings to the Data Access Layer.
6. The Data Access Layer sends the settings back to the Service Layer.
7. The Service Layer forwards the settings to the API Layer.
8. The API Layer displays the current accounting settings to the user.
9. The user modifies the accounting settings and submits the changes.
10. The API Layer forwards the updated settings to the Service Layer.
11. The Service Layer updates the settings in the Data Access Layer.
12. The Data Access Layer saves the updated settings in the Database System.
13. The Database System acknowledges the update to the Data Access Layer.
14. The Data Access Layer confirms the update to the Service Layer.
15. The Service Layer notifies the API Layer of the successful accounting settings update.
16. The API Layer informs the user that the accounting settings have been successfully updated.
```mermaid
sequenceDiagram
autonumber
participant U as User
participant API as API Layer
participant S as Service Layer
participant D as Data Access Layer
participant DB as Database System
U->>API: Request current accounting settings
API->>S: Forward request
S->>D: Retrieve settings
D->>DB: Query settings
DB-->>D: Return settings
D-->>S: Send settings
S-->>API: Forward settings
API->>U: Display settings
U->>API: Submit updated settings
API->>S: Forward updates
S->>D: Update settings
D->>DB: Save updated settings
DB-->>D: Acknowledge update
D-->>S: Confirm update
S-->>API: Notify successful update
API->>U: Inform user of success
```
## Currency Setup
1. The user navigates to the currency setup section.
2. The API Layer retrieves the current currency settings from the Service Layer.
3. The Service Layer requests the settings from the Data Access Layer.
4. The Data Access Layer queries the Database System for the currency settings.
5. The Database System returns the currency settings to the Data Access Layer.
6. The Data Access Layer sends the settings back to the Service Layer.
7. The Service Layer forwards the settings to the API Layer.
8. The API Layer displays the current currency settings to the user.
9. The user modifies the currency settings and submits the changes.
10. The API Layer forwards the updated settings to the Service Layer.
11. The Service Layer updates the settings in the Data Access Layer.
12. The Data Access Layer saves the updated settings in the Database System.
13. The Database System acknowledges the update to the Data Access Layer.
14. The Data Access Layer confirms the update to the Service Layer.
15. The Service Layer notifies the API Layer of the successful currency settings update.
16. The API Layer informs the user that the currency settings have been successfully updated.
```mermaid
sequenceDiagram
autonumber
participant U as User
participant API as API Layer
participant S as Service Layer
participant D as Data Access Layer
participant DB as Database System
U->>API: Request current currency settings
API->>S: Forward request
S->>D: Retrieve settings
D->>DB: Query settings
DB-->>D: Return settings
D-->>S: Send settings
S-->>API: Forward settings
API->>U: Display settings
U->>API: Submit updated settings
API->>S: Forward updates
S->>D: Update settings
D->>DB: Save updated settings
DB-->>D: Acknowledge update
D-->>S: Confirm update
S-->>API: Notify successful update
API->>U: Inform user of success
```
## Report Generation
1. The user selects the option to generate a report.
2. The API Layer receives the report generation request and forwards it to the Reporting Module in the Service Layer.
3. The Reporting Module retrieves the necessary data from the Data Access Layer.
4. The Data Access Layer queries the Database System for the required data.
5. The Database System returns the data to the Data Access Layer.
6. The Data Access Layer sends the data back to the Reporting Module.
7. The Reporting Module processes the data and generates the report.
8. The Reporting Module returns the generated report to the API Layer.
9. The API Layer delivers the report to the user for download or viewing.
```mermaid
sequenceDiagram
autonumber
participant U as User
participant API as API Layer
participant S as Service Layer
participant D as Data Access Layer
participant DB as Database System
    U->>API: Request report generation
    API->>S: Forward request to Reporting Module
    S->>D: Retrieve data for report
    D->>DB: Query data
    DB-->>D: Return data
    D-->>S: Send data
    S->>S: Process data and generate report
    S-->>API: Return generated report
    API->>U: Deliver report for download or viewing
```
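The Reporting Module's processing step (7) can be sketched as below. The cost schema, report format, and all names are assumptions, and a stub replaces the Data Access Layer.

```python
# Sketch of report generation: fetch rows via the Data Access Layer,
# aggregate them, and return a rendered report to the API Layer.

class ReportingModule:
    def __init__(self, dal):
        self._dal = dal

    def generate_report(self, scenario_id: int) -> str:
        rows = self._dal.fetch_costs(scenario_id)   # via Data Access Layer
        total = sum(r["amount"] for r in rows)      # processing step
        lines = [f"{r['item']}: {r['amount']:.2f}" for r in rows]
        lines.append(f"TOTAL: {total:.2f}")
        return "\n".join(lines)                     # returned to the API Layer


class FakeDAL:
    """Stub standing in for the Data Access Layer and Database System."""
    def fetch_costs(self, scenario_id: int) -> list[dict]:
        return [{"item": "drilling", "amount": 1200.0},
                {"item": "hauling", "amount": 800.5}]


print(ReportingModule(FakeDAL()).generate_report(1))
```

A real implementation would render to PDF or CSV rather than plain text, but the fetch-process-return shape is the one the steps above describe.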
## User Management
1. The administrator navigates to the user management section.
2. The API Layer retrieves the list of users from the Service Layer.
3. The Service Layer requests the user list from the Data Access Layer.
4. The Data Access Layer queries the Database System for the user information.
5. The Database System returns the user information to the Data Access Layer.
6. The Data Access Layer sends the user information back to the Service Layer.
7. The Service Layer forwards the user information to the API Layer.
8. The API Layer displays the list of users to the administrator.
9. The administrator can add, modify, or delete users as needed.
10. The API Layer forwards any user management actions to the Service Layer.
11. The Service Layer processes the actions in the Data Access Layer.
12. The Data Access Layer updates the user information in the Database System.
13. The Database System acknowledges the updates to the Data Access Layer.
14. The Data Access Layer confirms the updates to the Service Layer.
15. The Service Layer notifies the API Layer of the successful user management actions.
16. The API Layer informs the administrator of the successful user management operations.
```mermaid
sequenceDiagram
autonumber
participant U as User
participant API as API Layer
participant S as Service Layer
participant D as Data Access Layer
participant DB as Database System
U->>API: Request user information
API->>S: Forward request
S->>D: Retrieve user data
D->>DB: Query user data
DB-->>D: Return user data
D-->>S: Send user data
S-->>API: Forward user data
API->>U: Display user information
U->>API: Submit user management actions
API->>S: Forward actions
S->>D: Process user management actions
D->>DB: Update user information
DB-->>D: Acknowledge updates
D-->>S: Confirm updates
S-->>API: Notify successful actions
API->>U: Inform user of success
```
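The add/modify/delete actions of step 9 can be sketched as a small service; role names and fields are assumptions for illustration.

```python
# Sketch of user management actions; the dict stands in for the Database System.

class UserService:
    def __init__(self):
        self._users: dict[str, dict] = {}

    def list_users(self) -> list[str]:
        return sorted(self._users)

    def add_user(self, name: str, role: str) -> None:
        self._users[name] = {"role": role}

    def modify_user(self, name: str, role: str) -> None:
        self._users[name]["role"] = role

    def delete_user(self, name: str) -> None:
        del self._users[name]


svc = UserService()
svc.add_user("alice", "admin")
svc.add_user("bob", "analyst")
svc.modify_user("bob", "viewer")
svc.delete_user("alice")
print(svc.list_users())  # ['bob']
```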
# Deployment View
<!--
The deployment view describes:
1. technical infrastructure used to execute your system, with infrastructure elements like geographical locations, environments, computers, processors, channels and net topologies as well as other infrastructure elements and
2. mapping of (software) building blocks to that infrastructure elements.
Often systems are executed in different environments, e.g. development environment, test environment, production environment. In such cases you should document all relevant environments.
Especially document a deployment view if your software is executed as distributed system with more than one computer, processor, server or container or when you design and construct your own hardware processors and chips.
From a software perspective it is sufficient to capture only those elements of an infrastructure that are needed to show a deployment of your building blocks. Hardware architects can go beyond that and describe an infrastructure to any level of detail they need to capture.
_Motivation:_ Software does not run without hardware. This underlying infrastructure can and will influence a system and/or some cross-cutting concepts. Therefore, there is a need to know the infrastructure.
_Form:_ Maybe a highest level deployment diagram is already contained in section 3.2. as technical context with your own infrastructure as ONE black box. In this section one can zoom into this black box using additional deployment diagrams: UML offers deployment diagrams to express that view. Use it, probably with nested diagrams, when your infrastructure is more complex. When your (hardware) stakeholders prefer other kinds of diagrams rather than a deployment diagram, let them use any kind that is able to show nodes and channels of the infrastructure.
## Infrastructure Level 1
Description of the highest level infrastructure.
## Infrastructure Level 2
Zoom into level 1.
## Mapping of Building Blocks to Infrastructure
Describe how software building blocks are mapped to the infrastructure.
-->
## Infrastructure Overview
<!--
Describe the highest level infrastructure used to execute your system. This may include geographical locations, environments, computers, processors, channels, and network topologies.
-->
Deployment spans multiple environments: Development, Testing, Staging, and Production. Each environment runs in Docker or Podman containers, with CI/CD pipelines orchestrated through Gitea Actions and cloud infrastructure managed by Coolify.
Communication channels and protocols are summarized in [Section 3.2 Technical Context](03_context_and_scope.md#technical-context).
## Mapping of Building Blocks to Infrastructure
| Building Block | Infrastructure Component |
| ---------------------- | ------------------------ |
| API Layer | Docker Container |
| Service Layer | Docker Container |
| Data Access Layer | Docker Container |
| Database | Managed Database Service |
| Frontend Layer | Docker Container |
| Authentication Service | Docker Container |
| Reporting Module | Docker Container |
| Simulation Engine | Docker Container |
| User Interface | Docker Container |
| Visualization Module | Docker Container |
| Mining Algorithm | Docker Container |
| Data Preprocessing | Docker Container |
| Result Postprocessing | Docker Container |
| Notification Service | Docker Container |
| Logging Service | Docker Container |
| Monitoring Service | Docker Container |
| Caching Layer | Docker Container |
| Load Balancer | Docker Container |
| API Gateway | Docker Container |
| Message Broker | Docker Container |
| Configuration Service | Docker Container |
| Backup Service | Docker Container |
| Analytics Module | Docker Container |
| Search Service | Docker Container |
| Scheduler Service | Docker Container |
| File Storage Service | Docker Container |
```mermaid
graph LR
subgraph Infrastructure
DC[Docker/Podman Containers]
MDS[Managed Database Service]
end
subgraph BuildingBlocks
API[API Layer]
SVC[Service Layer]
DAL[Data Access Layer]
DB[Database]
FE[Frontend Layer]
end
API --> DC
SVC --> DC
DAL --> DC
DB --> MDS
FE --> DC
```
## Level 2
### Docker/Podman Containers
The Docker/Podman Containers host various building blocks of the system, including the API Layer, Service Layer, Data Access Layer, Frontend Layer, and other services such as Authentication Service, Reporting Module, and Simulation Engine. Each container is configured to ensure optimal performance and security.
### Managed Database Service
The Managed Database Service hosts the PostgreSQL database, which stores the data used and generated by the system. The service is configured for high availability, backup, and recovery to ensure data integrity.
## Level 3
### Calminer Deployment Container
The Calminer Deployment Container encapsulates the entire application, including all necessary building blocks and dependencies, to ensure consistent deployment across different environments.
### Building Block to Container Mapping
| Building Block | Container Component |
| ----------------- | ------------------- |
| API Layer | Calminer Container |
| Service Layer | Calminer Container |
| Data Access Layer | Calminer Container |
| Database | Database Container |
| Frontend Layer | Calminer Container |
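This mapping could be expressed as a Compose file for local and development environments. The sketch below is hypothetical: service names, image tags, ports, and credentials are assumptions, and in Staging and Production the database runs on the Managed Database Service rather than a local container.

```yaml
# Hypothetical docker-compose.yml sketch; names and ports are assumptions.
services:
  calminer:
    image: calminer:latest        # API, Service, Data Access, Frontend layers
    ports:
      - "8000:8000"
    environment:
      DATABASE_URL: postgresql://calminer:secret@db:5432/calminer
    depends_on:
      - db
  db:
    image: postgres:16            # local stand-in for the Managed Database Service
    environment:
      POSTGRES_USER: calminer
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: calminer
```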
architecture/README.md
# Architecture Documentation
This folder contains the architecture documentation for the Calminer project, following the arc42 template structure.
## Chapters
1. [Introduction and Goals](01_introduction_and_goals.md)
2. [Architecture Constraints](02_architecture_constraints.md)
3. [Context and Scope](03_context_and_scope.md)
4. [Solution Strategy](04_solution_strategy.md)
5. [Building Block View](05_building_block_view.md)
6. [Runtime View](06_runtime_view.md)
7. [Deployment View](07_deployment_view.md)
8. [Concepts](08_concepts.md)
9. [Architecture Decisions](09_architecture_decisions.md)
10. [Quality Requirements](10_quality_requirements.md)
11. [Technical Risks](11_technical_risks.md)
12. [Glossary](12_glossary.md)
## About
This documentation is based on the arc42 template for software architecture documentation.