Compare commits

62 commits: `1450563ce1...main`

| SHA |
| --- |
| 02fc5995db |
| 299ad7d943 |
| 3d0a08a8ef |
| 2ca7ae538f |
| 37edef716a |
| d5a94947de |
| 615b842b03 |
| 998cc2e472 |
| 81c06ad13b |
| d1c2b6da68 |
| 0ae0e6e7fa |
| bdb7c7c43a |
| bd77d4c43e |
| cc96d26b08 |
| 8e36f48527 |
| df85676fa2 |
| 951a653dc9 |
| d4421616e5 |
| df24a07cd8 |
| 6ef8816736 |
| 472fe1cab8 |
| 712c556032 |
| 3d32e6df74 |
| 78b76dc331 |
| acc6991341 |
| 96d13fc440 |
| fe32c32726 |
| da3ae822f6 |
| e1d74fe163 |
| 20de65ad01 |
| 3224d16197 |
| 8871f136d4 |
| 52e95e3fe0 |
| abd61798c7 |
| d484721a94 |
| 87efca00df |
| fe41a8cbee |
| 5371bdce3b |
| 80999b3659 |
| 8f4d01d34d |
| 2c6fdc03a8 |
| ae7727c01a |
| 508f0e5d40 |
| f9ada784db |
| 8c5798db43 |
| e5b413c79d |
| 1c96ae17fc |
| 58c2cb4490 |
| 17ae8d9477 |
| 98d59de2d1 |
| 3b807c0f75 |
| 4edadd7623 |
| 53d2d2ffef |
| f2a9f187f2 |
| ee45dd9526 |
| 5e24215ffe |
| 5ea568aaf6 |
| 05309f26b4 |
| 3ee4ed7e7f |
| ce8393a8e2 |
| 78b503fe43 |
| 48a7ed68e6 |
@@ -0,0 +1,13 @@
# Copy this file to .env and fill in the values

# openrouter.ai API key
OPENROUTER_API_KEY=

# JWT secret for signing tokens (generate with: python -c "import secrets; print(secrets.token_hex(32))")
JWT_SECRET=

# Path to DuckDB database file
DB_PATH=data/app.db

# Log level (DEBUG, INFO, WARNING, ERROR)
LOG_LEVEL=INFO
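The JWT secret one-liner referenced in the file above simply asks the standard library for 32 random bytes rendered as hex; a quick check of what it produces:

```python
import secrets
import string

# token_hex(32) draws 32 cryptographically strong random bytes and encodes
# them as hex, yielding a 64-character string suitable for JWT_SECRET.
secret = secrets.token_hex(32)
is_hex = all(c in string.hexdigits for c in secret)
```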
+6 -1
@@ -44,4 +44,9 @@ htmlcov/
 Thumbs.db
 
 # instructions
-.github/instructions/
+.github/instructions/
+
+# Logs and generated data
+logs/
+data/
+backend/data/
@@ -1,7 +1,16 @@
-# AI
+# All You Can GET AI
 
 A multi-modal AI web application. Users can choose between different AI models for text generation, text-to-image, text-to-video, and image-to-video generation, powered by [openrouter.ai](https://openrouter.ai).
 
+Key features:
+
+- Multi-modal AI generation (text, images, videos)
+- User authentication and role-based access control
+- Admin dashboard for managing users, models, and video jobs
+- Gallery for viewing generated images and videos
+- Chat interface with message history
+- Image upload and preview functionality
+
 ## Components
 
 | Component | Technology | Description |
@@ -14,15 +23,15 @@ A multi-modal AI web application. Users can choose between different AI models f
 ### Prerequisites
 
-- Python 3.11+
+- Python 3.12+
 - An [openrouter.ai](https://openrouter.ai) API key
 
 ### Setup
 
 ```bash
 # Clone the repo
-git clone <repo-url>
-cd 12-ai.allucanget.biz
+git clone https://git.allucanget.biz/allucanget/ai.allucanget.biz.git
+cd ai.allucanget.biz
 
 # Create and activate virtual environment
 python -m venv .venv
@@ -31,49 +40,98 @@ python -m venv .venv
 # Linux/macOS
 source .venv/bin/activate
 
-# Install dependencies
+# Install core dependencies
 pip install -r requirements.txt
 
-# Copy and fill in environment variables
+# Install development dependencies
+pip install -r backend/requirements-dev.txt
+pip install -r frontend/requirements-dev.txt
+
+# Copy environment variables file
 cp .env.example .env
+
+# Edit .env file and add your OpenRouter API key and configure other settings
+nano .env
 ```
 
-### Running the backend
+### Running the application locally
+
+#### Backend (FastAPI + Uvicorn)
 
 ```bash
 cd backend
-uvicorn app.main:app --reload --port 8000
+uvicorn app.main:app --reload --port 12015
 ```
 
-### Running the frontend
+#### Frontend (Flask)
 
 ```bash
 cd frontend
-flask --app app.main run --port 5000
+flask --app app.main run --port 12016 --debug
 ```
 
 ### Running tests
 
 ```bash
 # Run all tests
 pytest
 
 # Run backend tests only
 pytest backend/tests/
 
 # Run frontend tests only
 pytest frontend/tests/
 ```
 
 ### Available Environment Variables
 
 | Variable             | Description                 | Default             |
 | -------------------- | --------------------------- | ------------------- |
 | `OPENROUTER_API_KEY` | Your OpenRouter API key     | _Required_          |
 | `ADMIN_EMAIL`        | Default admin user email    | `ai@allucanget.biz` |
 | `ADMIN_PASSWORD`     | Default admin user password | `admin123`          |
 | `DB_PATH`            | DuckDB database file path   | `data/app.db`       |
 
 ## Default admin user
 
 On first startup a default admin account is created:
 
 | Field    | Value               |
 | -------- | ------------------- |
 | Email    | `ai@allucanget.biz` |
 | Password | `admin123`          |
 | Role     | `admin`             |
 
 Override it by setting the `ADMIN_EMAIL` and `ADMIN_PASSWORD` environment variables before first run.
 
 ## Deployment
 
 Deployed on [Coolify](https://coolify.io) using Nixpacks. See [docs/deployment/coolify.md](docs/deployment/coolify.md) for full instructions.
 
 ## Project Structure
 
-```
+```txt
-backend/               FastAPI backend
-  app/
-    routers/           API route handlers
-    services/          Business logic
-    models/            Pydantic models
-  tests/
-frontend/              Flask frontend
-  app/
-    templates/         Jinja2 HTML templates
-    static/            CSS, JS, images
-  tests/
-data/                  DuckDB database files (gitignored)
-docs/                  Architecture documentation
+backend/               FastAPI backend
+  app/
+    __init__.py        Package initialization
+    db.py              Database connection and operations
+    dependencies.py    Dependency injection
+    main.py            FastAPI application entrypoint
+    models/            Pydantic and database models
+    routers/           API route handlers (auth, users, admin, generate, gallery)
+    services/          Business logic for AI generation, users, admin, etc.
+  tests/               Backend test suite
+frontend/              Flask frontend
+  app/
+    __init__.py        Package initialization
+    main.py            Flask application entrypoint
+    templates/         Jinja2 HTML templates
+    static/            CSS, JS, images
+  tests/               Frontend test suite
+data/                  DuckDB database files, uploaded media, and generated content
+logs/                  Application logs
+docs/                  Architecture documentation (arc42 template)
+nginx/                 Nginx configuration for Coolify deployment
 ```
 
 ## Documentation
@@ -0,0 +1,22 @@
FROM python:3.12-slim

WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    gcc \
    curl \
    && rm -rf /var/lib/apt/lists/*

# Copy requirements and install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY . .

# Expose port
EXPOSE 12015

# Run the application
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "12015"]
@@ -0,0 +1,149 @@
"""DuckDB singleton connection with asyncio write lock and schema migrations."""
import asyncio
import os

import duckdb

_conn: duckdb.DuckDBPyConnection | None = None
_write_lock = asyncio.Lock()


def get_db_path() -> str:
    return os.getenv("DB_PATH", "data/app.db")


def init_db(path: str | None = None) -> duckdb.DuckDBPyConnection:
    """Open (or reuse) the DuckDB connection and run schema migrations."""
    global _conn
    if _conn is not None:
        return _conn
    db_path = path or get_db_path()
    if db_path != ":memory:":
        os.makedirs(os.path.dirname(db_path), exist_ok=True)
    _conn = duckdb.connect(db_path)
    _run_migrations(_conn)
    return _conn


def get_conn() -> duckdb.DuckDBPyConnection:
    """Return the active connection; raises if not yet initialised."""
    if _conn is None:
        raise RuntimeError("Database not initialised. Call init_db() first.")
    return _conn


def close_db() -> None:
    """Close the connection (called on app shutdown)."""
    global _conn
    if _conn is not None:
        _conn.close()
        _conn = None


def get_write_lock() -> asyncio.Lock:
    """Return the asyncio lock that serialises write operations."""
    return _write_lock


def _run_migrations(conn: duckdb.DuckDBPyConnection) -> None:
    conn.execute("""
        CREATE TABLE IF NOT EXISTS users (
            id UUID DEFAULT uuid() PRIMARY KEY,
            email VARCHAR NOT NULL UNIQUE,
            password_hash VARCHAR NOT NULL,
            role VARCHAR DEFAULT 'user',
            created_at TIMESTAMP DEFAULT now(),
            updated_at TIMESTAMP DEFAULT now()
        )
    """)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS refresh_tokens (
            jti UUID DEFAULT uuid() PRIMARY KEY,
            user_id UUID NOT NULL,
            issued_at TIMESTAMP DEFAULT now(),
            expires_at TIMESTAMP NOT NULL,
            revoked BOOLEAN DEFAULT false
        )
    """)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS uploaded_images (
            id UUID DEFAULT uuid() PRIMARY KEY,
            user_id UUID NOT NULL,
            filename VARCHAR NOT NULL,
            content_type VARCHAR NOT NULL,
            file_path VARCHAR NOT NULL,
            size_bytes BIGINT NOT NULL,
            created_at TIMESTAMP DEFAULT now()
        )
    """)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS models_cache (
            id UUID DEFAULT uuid() PRIMARY KEY,
            model_id VARCHAR NOT NULL UNIQUE,
            name VARCHAR NOT NULL,
            modality VARCHAR NOT NULL,
            context_length BIGINT,
            pricing JSON,
            fetched_at TIMESTAMP NOT NULL
        )
    """)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS generated_images (
            id UUID DEFAULT uuid() PRIMARY KEY,
            user_id UUID NOT NULL,
            model_id VARCHAR NOT NULL,
            prompt VARCHAR NOT NULL,
            image_data VARCHAR NOT NULL,
            created_at TIMESTAMP DEFAULT now()
        )
    """)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS generated_videos (
            id UUID DEFAULT uuid() PRIMARY KEY,
            user_id UUID NOT NULL,
            job_id VARCHAR NOT NULL,
            model_id VARCHAR NOT NULL,
            prompt VARCHAR NOT NULL,
            polling_url VARCHAR,
            status VARCHAR NOT NULL DEFAULT 'pending',
            video_url VARCHAR,
            created_at TIMESTAMP DEFAULT now(),
            updated_at TIMESTAMP DEFAULT now()
        )
    """)
    # Migration: add output_modalities column if absent (stores JSON array string)
    conn.execute("""
        ALTER TABLE models_cache ADD COLUMN IF NOT EXISTS output_modalities VARCHAR
    """)
    # Migration: add video job request params + generation type
    conn.execute("""
        ALTER TABLE generated_videos ADD COLUMN IF NOT EXISTS request_params VARCHAR
    """)
    conn.execute("""
        ALTER TABLE generated_videos ADD COLUMN IF NOT EXISTS generation_type VARCHAR DEFAULT 'text_to_video'
    """)
    conn.execute("""
        ALTER TABLE generated_videos ADD COLUMN IF NOT EXISTS error VARCHAR
    """)
    _seed_admin(conn)


def _seed_admin(conn: duckdb.DuckDBPyConnection) -> None:
    """Insert the default admin user if it doesn't already exist."""
    from passlib.context import CryptContext
    _pwd = CryptContext(schemes=["bcrypt"], deprecated="auto")

    email = os.getenv("ADMIN_EMAIL", "ai@allucanget.biz")
    password = os.getenv("ADMIN_PASSWORD", "admin123")

    existing = conn.execute(
        "SELECT id FROM users WHERE email = ?", [email]
    ).fetchone()
    if existing is None:
        password_hash = _pwd.hash(password)
        conn.execute(
            """
            INSERT INTO users (email, password_hash, role)
            VALUES (?, ?, 'admin')
            """,
            [email, password_hash],
        )
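The module above funnels every mutating statement through a single `asyncio.Lock`, since one DuckDB connection is shared across async handlers. A minimal sketch of that pattern, with an in-memory list standing in for the connection (the helper names here are illustrative, not from the app):

```python
import asyncio

_write_lock = asyncio.Lock()  # shared lock, as returned by get_write_lock()
_rows: list[int] = []         # stand-in for the single shared DB connection

async def guarded_write(value: int) -> None:
    # Every write acquires the lock first, so concurrent handlers
    # never interleave mutations on the shared connection.
    async with _write_lock:
        _rows.append(value)

async def main() -> list[int]:
    await asyncio.gather(*(guarded_write(i) for i in range(5)))
    return sorted(_rows)

written = asyncio.run(main())
```

Reads bypass the lock in the app; only writes are serialised.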
@@ -0,0 +1,42 @@
"""FastAPI dependencies (e.g. authenticated user extraction)."""
from fastapi import Depends, HTTPException, status
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer
from jose import JWTError

from .services.auth import decode_token

_bearer = HTTPBearer()


async def get_current_user(
    credentials: HTTPAuthorizationCredentials = Depends(_bearer),
) -> dict:
    """Extract and validate the Bearer JWT. Returns the token payload."""
    credentials_error = HTTPException(
        status_code=status.HTTP_401_UNAUTHORIZED,
        detail="Could not validate credentials.",
        headers={"WWW-Authenticate": "Bearer"},
    )
    try:
        payload = decode_token(credentials.credentials)
    except JWTError:
        raise credentials_error

    if payload.get("type") != "access":
        raise credentials_error

    user_id: str | None = payload.get("sub")
    if user_id is None:
        raise credentials_error

    return {"id": user_id, "email": payload.get("email"), "role": payload.get("role")}


async def require_admin(current_user: dict = Depends(get_current_user)) -> dict:
    """Raise 403 if the authenticated user is not an admin."""
    if current_user.get("role") != "admin":
        raise HTTPException(
            status_code=status.HTTP_403_FORBIDDEN,
            detail="Admin access required.",
        )
    return current_user
@@ -0,0 +1,60 @@
import asyncio
import os
from contextlib import asynccontextmanager

from dotenv import load_dotenv
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

from .db import close_db, get_conn, get_write_lock, init_db
from .routers import admin, ai, auth, generate, images, models, users
from .services.video_worker import run_worker

load_dotenv()


@asynccontextmanager
async def lifespan(app: FastAPI):
    init_db()
    worker_task = asyncio.create_task(run_worker(get_conn(), get_write_lock()))
    yield
    worker_task.cancel()
    try:
        await worker_task
    except asyncio.CancelledError:
        pass
    close_db()


app = FastAPI(
    title="All You Can GET AI Biz API",
    description="Multi-modal AI generation API powered by openrouter.ai",
    version="0.1.0",
    lifespan=lifespan,
)

app.add_middleware(
    CORSMiddleware,
    allow_origins=[os.getenv("CORS_ORIGINS", "http://localhost:12016")],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

app.include_router(auth.router)
app.include_router(users.router)
app.include_router(admin.router)
app.include_router(ai.router)
app.include_router(generate.router)
app.include_router(images.router)
app.include_router(models.router)


@app.get("/health", tags=["health"])
async def health() -> dict:
    return {"status": "ok"}
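The lifespan handler above cancels the background worker on shutdown and awaits it, swallowing the expected `CancelledError`. That cancel-and-await dance in isolation (the worker body is a stand-in for the app's video worker loop):

```python
import asyncio

async def run_worker() -> None:
    # Stand-in for the video worker's polling loop.
    while True:
        await asyncio.sleep(0.01)

async def shutdown_demo() -> bool:
    task = asyncio.create_task(run_worker())
    await asyncio.sleep(0.05)  # the app would serve requests here
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        pass  # expected: the worker was cancelled, not crashed
    return task.cancelled()

stopped = asyncio.run(shutdown_demo())
```

Awaiting the cancelled task before `close_db()` ensures the worker is not mid-write when the connection closes.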
@@ -0,0 +1,102 @@
"""Pydantic schemas for AI generation endpoints."""
from typing import Any

from pydantic import BaseModel


class ChatMessage(BaseModel):
    role: str  # "user" | "assistant" | "system"
    content: str


class ChatRequest(BaseModel):
    model: str
    messages: list[ChatMessage]
    temperature: float = 0.7
    max_tokens: int = 1024


class ChatResponse(BaseModel):
    id: str
    model: str
    content: str
    usage: dict[str, Any] | None = None


class ModelInfo(BaseModel):
    id: str
    name: str
    context_length: int | None = None
    pricing: dict[str, Any] | None = None


# --- Text generation ---

class TextRequest(BaseModel):
    model: str
    prompt: str = ""
    system_prompt: str | None = None
    messages: list[ChatMessage] | None = None
    temperature: float = 0.7
    max_tokens: int = 1024


class TextResponse(BaseModel):
    id: str
    model: str
    content: str
    usage: dict[str, Any] | None = None


# --- Image generation ---

class ImageRequest(BaseModel):
    model: str
    prompt: str
    n: int = 1
    size: str = "1024x1024"
    aspect_ratio: str | None = None  # e.g. "1:1", "16:9", "9:16"
    image_size: str | None = None  # e.g. "0.5K", "1K", "2K", "4K"


class ImageResult(BaseModel):
    url: str | None = None
    b64_json: str | None = None
    revised_prompt: str | None = None
    image_id: str | None = None  # UUID of stored row in generated_images


class ImageResponse(BaseModel):
    id: str
    model: str
    images: list[ImageResult]


# --- Video generation ---

class VideoRequest(BaseModel):
    model: str
    prompt: str
    duration_seconds: int | None = None
    aspect_ratio: str = "16:9"
    resolution: str | None = None  # e.g. "480p", "720p", "1080p"


class VideoFromImageRequest(BaseModel):
    model: str
    image_url: str
    prompt: str
    duration_seconds: int | None = None
    aspect_ratio: str = "16:9"
    resolution: str | None = None  # e.g. "480p", "720p", "1080p"


class VideoResponse(BaseModel):
    id: str  # This is the job_id from the provider
    db_id: str | None = None  # This is the UUID from our generated_videos table
    model: str
    status: str  # "queued" | "processing" | "completed" | "failed"
    polling_url: str | None = None
    video_urls: list[str] | None = None
    video_url: str | None = None  # first entry of video_urls for convenience
    error: str | None = None
    metadata: dict[str, Any] | None = None
@@ -0,0 +1,22 @@
"""Pydantic schemas for authentication endpoints."""
from pydantic import BaseModel, EmailStr


class RegisterRequest(BaseModel):
    email: EmailStr
    password: str


class LoginRequest(BaseModel):
    email: EmailStr
    password: str


class TokenResponse(BaseModel):
    access_token: str
    refresh_token: str
    token_type: str = "bearer"


class RefreshRequest(BaseModel):
    refresh_token: str
@@ -0,0 +1,17 @@
"""Pydantic schemas for user management endpoints."""
from pydantic import BaseModel, EmailStr


class UserResponse(BaseModel):
    id: str
    email: str
    role: str


class UpdateUserRequest(BaseModel):
    email: EmailStr | None = None
    password: str | None = None


class SetRoleRequest(BaseModel):
    role: str
@@ -0,0 +1,227 @@
"""Admin router: operational endpoints for application management."""
from datetime import datetime, timedelta, timezone
from typing import Any

from fastapi import APIRouter, Depends, HTTPException

from ..db import get_conn, get_write_lock
from ..dependencies import require_admin
from ..services import models as models_service
from ..services.models import mark_timed_out_video_jobs

router = APIRouter(prefix="/admin", tags=["admin"])


@router.get("/stats")
async def get_stats(_: dict = Depends(require_admin)) -> dict:
    """Return aggregate statistics: user counts and token counts."""
    conn = get_conn()
    sql_user_count = "SELECT COUNT(*) FROM users"
    sql_user_counts = "SELECT role, COUNT(*) FROM users GROUP BY role ORDER BY role"
    sql_token_count = "SELECT COUNT(*) FROM refresh_tokens"
    sql_tokens_active = "SELECT COUNT(*) FROM refresh_tokens WHERE revoked = false AND expires_at > ?"
    now = datetime.now(timezone.utc)

    total_users_row = conn.execute(sql_user_count).fetchone()
    total_users = total_users_row[0] if total_users_row else 0

    users_by_role = conn.execute(sql_user_counts).fetchall()

    total_tokens_row = conn.execute(sql_token_count).fetchone()
    total_tokens = total_tokens_row[0] if total_tokens_row else 0

    active_tokens_row = conn.execute(sql_tokens_active, [now]).fetchone()
    active_tokens = active_tokens_row[0] if active_tokens_row else 0

    return {
        "users": {
            "total": total_users,
            "by_role": {row[0]: row[1] for row in users_by_role},
        },
        "refresh_tokens": {
            "total": total_tokens,
            "active": active_tokens,
            "revoked_or_expired": total_tokens - active_tokens,
        },
    }


@router.get("/health/db")
async def db_health(_: dict = Depends(require_admin)) -> dict:
    """Verify DuckDB is reachable."""
    conn = get_conn()
    result_row = conn.execute("SELECT 1").fetchone()
    result = result_row[0] if result_row else 0
    return {"status": "ok" if result == 1 else "error"}


@router.post("/tokens/purge", status_code=200)
async def purge_tokens(_: dict = Depends(require_admin)) -> dict:
    """Delete all expired or revoked refresh tokens. Returns count removed."""
    conn = get_conn()
    lock = get_write_lock()
    now = datetime.now(timezone.utc)
    sql_count = "SELECT COUNT(*) FROM refresh_tokens"
    sql_delete = "DELETE FROM refresh_tokens WHERE revoked = true OR expires_at <= ?"
    async with lock:
        before_row = conn.execute(sql_count).fetchone()
        before = before_row[0] if before_row else 0

        conn.execute(sql_delete, [now])

        after_row = conn.execute(sql_count).fetchone()
        after = after_row[0] if after_row else 0

    return {"deleted": before - after, "remaining": after}


@router.get("/models/status")
async def get_model_status(_: dict = Depends(require_admin)) -> dict[str, Any]:
    """Return model cache status: last update time and model count."""
    conn = get_conn()
    return models_service.get_cache_status(conn)


@router.get("/models")
async def get_all_models(_: dict = Depends(require_admin)) -> list[dict[str, Any]]:
    """Return all cached models."""
    conn = get_conn()
    return models_service.get_cached_models(conn)


@router.post("/models/refresh", status_code=200)
async def refresh_models(
    _: dict = Depends(require_admin),
) -> dict[str, str | int | None]:
    """Force a refresh of the model cache from OpenRouter."""
    conn = get_conn()
    lock = get_write_lock()
    async with lock:
        count = await models_service.refresh_models_cache(conn)
        status = models_service.get_cache_status(conn)
    return {
        "status": "ok",
        "refreshed": count,
        "total_models": status.get("model_count"),
        "last_updated": status.get("last_updated"),
    }


@router.get("/videos")
async def admin_list_video_jobs(_: dict = Depends(require_admin)) -> list[dict[str, Any]]:
    """Return all video generation jobs across all users."""
    conn = get_conn()
    rows = conn.execute(
        """
        SELECT
            v.id, v.job_id, v.user_id, u.email, v.model_id, v.prompt,
            v.status, v.video_url, v.created_at, v.updated_at
        FROM generated_videos v
        LEFT JOIN users u ON v.user_id = u.id
        ORDER BY v.created_at DESC
        """
    ).fetchall()
    return [
        {
            "id": str(row[0]),
            "job_id": row[1],
            "user_id": str(row[2]),
            "user_email": row[3],
            "model_id": row[4],
            "prompt": row[5],
            "status": row[6],
            "video_url": row[7],
            "created_at": row[8].isoformat() if row[8] else None,
            "updated_at": row[9].isoformat() if row[9] else None,
        }
        for row in rows
    ]


@router.post("/videos/{job_id}/cancel", status_code=200)
async def admin_cancel_video_job(job_id: str, _: dict = Depends(require_admin)) -> dict[str, str]:
    """Mark a video job as 'cancelled'. Does not stop the provider job."""
    conn = get_conn()
    lock = get_write_lock()
    now = datetime.now(timezone.utc)
    async with lock:
        conn.execute(
            "UPDATE generated_videos SET status = 'cancelled', updated_at = ? WHERE id = ?",
            [now, job_id],
        )
    return {"status": "ok", "job_id": job_id}


@router.post("/videos/purge", status_code=200)
async def admin_purge_video_jobs(_: dict = Depends(require_admin)) -> dict[str, Any]:
    """Delete all completed, failed, or cancelled jobs older than 30 days."""
    conn = get_conn()
    lock = get_write_lock()
    thirty_days_ago = datetime.now(timezone.utc) - timedelta(days=30)

    sql_count = "SELECT COUNT(*) FROM generated_videos"
    sql_delete = """
        DELETE FROM generated_videos
        WHERE status IN ('completed', 'failed', 'cancelled')
          AND updated_at < ?
    """

    async with lock:
        before_row = conn.execute(sql_count).fetchone()
        before = before_row[0] if before_row else 0

        conn.execute(sql_delete, [thirty_days_ago])

        after_row = conn.execute(sql_count).fetchone()
        after = after_row[0] if after_row else 0

    return {"deleted": before - after, "remaining": after}


@router.post("/videos/timed-out", status_code=200)
async def admin_mark_timed_out(_: dict = Depends(require_admin)) -> dict[str, int]:
    """Mark video jobs stuck in 'queued' or 'processing' for too long as 'failed'."""
    conn = get_conn()
    count = mark_timed_out_video_jobs(conn, timeout_minutes=120)
    return {"timed_out": count}


@router.post("/videos/{job_id}/retry", status_code=200)
async def admin_retry_video_job(job_id: str, _: dict = Depends(require_admin)) -> dict[str, str]:
    """Reset a failed or cancelled video job back to 'queued' for reprocessing."""
    conn = get_conn()
    lock = get_write_lock()
    now = datetime.now(timezone.utc)
    async with lock:
        row = conn.execute(
            "SELECT status FROM generated_videos WHERE id = ?", [job_id]
        ).fetchone()
        if row is None:
            raise HTTPException(status_code=404, detail="Job not found")
        if row[0] not in ("failed", "cancelled"):
            raise HTTPException(
                status_code=400, detail=f"Cannot retry job with status '{row[0]}'"
            )
        conn.execute(
            "UPDATE generated_videos SET status = 'queued', updated_at = ? WHERE id = ?",
            [now, job_id],
        )
    return {"status": "ok", "job_id": job_id}


@router.delete("/videos/{job_id}", status_code=200)
async def admin_delete_video_job(job_id: str, _: dict = Depends(require_admin)) -> dict[str, str]:
    """Permanently delete a video job record."""
    conn = get_conn()
    lock = get_write_lock()
    async with lock:
        row = conn.execute(
            "SELECT id FROM generated_videos WHERE id = ?", [job_id]
        ).fetchone()
        if row is None:
            raise HTTPException(status_code=404, detail="Job not found")
        conn.execute("DELETE FROM generated_videos WHERE id = ?", [job_id])
    return {"status": "ok", "job_id": job_id}
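Both purge endpoints report the number of deleted rows as count-before minus count-after, since DuckDB's `DELETE` here is not asked for a rowcount. The same pattern with the stdlib's sqlite3 standing in for DuckDB (both use `?` placeholders; the table and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE refresh_tokens (jti TEXT, revoked INTEGER)")
conn.executemany(
    "INSERT INTO refresh_tokens VALUES (?, ?)",
    [("a", 1), ("b", 0), ("c", 1)],
)

# Count before, delete, count after -- the difference is the purged total.
before = conn.execute("SELECT COUNT(*) FROM refresh_tokens").fetchone()[0]
conn.execute("DELETE FROM refresh_tokens WHERE revoked = 1")
after = conn.execute("SELECT COUNT(*) FROM refresh_tokens").fetchone()[0]

report = {"deleted": before - after, "remaining": after}
```

In the app this read-delete-read sequence runs under the write lock, so no other writer can change the counts in between.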
@@ -0,0 +1,63 @@
"""AI router: model listing and chat completions via OpenRouter."""
from fastapi import APIRouter, Depends, HTTPException, status

from ..dependencies import get_current_user
from ..models.ai import ChatRequest, ChatResponse, ModelInfo
from ..services import openrouter

router = APIRouter(prefix="/ai", tags=["ai"])


@router.get("/models", response_model=list[ModelInfo])
async def get_models(_: dict = Depends(get_current_user)) -> list[ModelInfo]:
    """List available AI models from OpenRouter."""
    try:
        raw = await openrouter.list_models()
    except Exception as exc:
        raise HTTPException(
            status_code=status.HTTP_502_BAD_GATEWAY,
            detail=f"OpenRouter error: {exc}",
        )
    return [
        ModelInfo(
            id=m.get("id", ""),
            name=m.get("name", m.get("id", "")),
            context_length=m.get("context_length"),
            pricing=m.get("pricing"),
        )
        for m in raw
    ]


@router.post("/chat", response_model=ChatResponse)
async def chat(
    body: ChatRequest,
    _: dict = Depends(get_current_user),
) -> ChatResponse:
    """Send a chat completion request through OpenRouter."""
    try:
        result = await openrouter.chat_completion(
            model=body.model,
            messages=[m.model_dump() for m in body.messages],
            temperature=body.temperature,
            max_tokens=body.max_tokens,
        )
    except Exception as exc:
        raise HTTPException(
            status_code=status.HTTP_502_BAD_GATEWAY,
            detail=f"OpenRouter error: {exc}",
        )

    try:
        choice = result["choices"][0]
        return ChatResponse(
            id=result["id"],
            model=result.get("model", body.model),
            content=choice["message"]["content"],
            usage=result.get("usage"),
        )
    except (KeyError, IndexError) as exc:
        raise HTTPException(
            status_code=status.HTTP_502_BAD_GATEWAY,
            detail=f"Unexpected response format from OpenRouter: {exc}",
        )
@@ -0,0 +1,97 @@
"""Auth router: register, login, refresh, logout."""
import uuid

from fastapi import APIRouter, HTTPException, status
from jose import JWTError

from ..models.auth import LoginRequest, RefreshRequest, RegisterRequest, TokenResponse
from ..services.auth import (
    authenticate_user,
    create_access_token,
    create_refresh_token,
    decode_token,
    register_user,
    revoke_refresh_token,
    store_refresh_token,
    validate_refresh_token_jti,
)

router = APIRouter(prefix="/auth", tags=["auth"])


@router.post("/register", status_code=status.HTTP_201_CREATED)
async def register(body: RegisterRequest) -> dict:
    try:
        user = await register_user(body.email, body.password)
    except ValueError as exc:
        raise HTTPException(
            status_code=status.HTTP_409_CONFLICT, detail=str(exc))
    return {"id": user["id"], "email": user["email"], "role": user["role"]}


@router.post("/login", response_model=TokenResponse)
async def login(body: LoginRequest) -> TokenResponse:
    user = await authenticate_user(body.email, body.password)
    if user is None:
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED,
            detail="Invalid credentials.",
            headers={"WWW-Authenticate": "Bearer"},
        )
    jti = str(uuid.uuid4())
    await store_refresh_token(user["id"], jti)
    return TokenResponse(
        access_token=create_access_token(user["id"], user["email"], user["role"]),
        refresh_token=create_refresh_token(user["id"], jti),
    )


@router.post("/refresh", response_model=TokenResponse)
async def refresh(body: RefreshRequest) -> TokenResponse:
    credentials_error = HTTPException(
        status_code=status.HTTP_401_UNAUTHORIZED,
        detail="Invalid or expired refresh token.",
        headers={"WWW-Authenticate": "Bearer"},
    )
    try:
        payload = decode_token(body.refresh_token)
    except JWTError:
        raise credentials_error

    if payload.get("type") != "refresh":
        raise credentials_error

    user_id: str = payload.get("sub", "")
    jti: str = payload.get("jti", "")

    if not await validate_refresh_token_jti(jti, user_id):
        raise credentials_error

    # Rotate: revoke old JTI, issue new pair
    await revoke_refresh_token(jti)
    new_jti = str(uuid.uuid4())
    await store_refresh_token(user_id, new_jti)

    from ..db import get_conn
    conn = get_conn()
    sql_fetch = "SELECT email, role FROM users WHERE id = ?"
    row = conn.execute(sql_fetch, [user_id]).fetchone()
    if row is None:
        raise credentials_error

    return TokenResponse(
        access_token=create_access_token(user_id, row[0], row[1]),
        refresh_token=create_refresh_token(user_id, new_jti),
    )


@router.post("/logout", status_code=status.HTTP_204_NO_CONTENT)
async def logout(body: RefreshRequest) -> None:
    try:
        payload = decode_token(body.refresh_token)
    except JWTError:
        return  # Already invalid; treat as success
    jti = payload.get("jti", "")
    if jti:
        await revoke_refresh_token(jti)
@@ -0,0 +1,403 @@
"""Generate router: text, image, video, and image-to-video generation."""
import json
from datetime import datetime, timezone

import httpx
from fastapi import APIRouter, Depends, HTTPException, status

from ..db import get_conn, get_write_lock
from ..dependencies import get_current_user
from ..models.ai import (
    ImageRequest,
    ImageResponse,
    ImageResult,
    TextRequest,
    TextResponse,
    VideoFromImageRequest,
    VideoRequest,
    VideoResponse,
)
from ..services import openrouter
from ..services.models import get_model_output_modalities

router = APIRouter(prefix="/generate", tags=["generate"])


@router.post("/text", response_model=TextResponse)
async def generate_text(
    body: TextRequest,
    _: dict = Depends(get_current_user),
) -> TextResponse:
    """Generate text from a prompt using a chat model."""
    if body.messages:
        messages = [{"role": m.role, "content": m.content} for m in body.messages]
        if body.system_prompt and (not messages or messages[0]["role"] != "system"):
            messages.insert(0, {"role": "system", "content": body.system_prompt})
    else:
        messages = []
        if body.system_prompt:
            messages.append({"role": "system", "content": body.system_prompt})
        messages.append({"role": "user", "content": body.prompt})

    try:
        result = await openrouter.chat_completion(
            model=body.model,
            messages=messages,
            temperature=body.temperature,
            max_tokens=body.max_tokens,
        )
    except Exception as exc:
        raise HTTPException(
            status_code=status.HTTP_502_BAD_GATEWAY,
            detail=f"OpenRouter error: {exc}",
        )

    try:
        choice = result["choices"][0]
        return TextResponse(
            id=result["id"],
            model=result.get("model", body.model),
            content=choice["message"]["content"],
            usage=result.get("usage"),
        )
    except (KeyError, IndexError) as exc:
        raise HTTPException(
            status_code=status.HTTP_502_BAD_GATEWAY,
            detail=f"Unexpected response format: {exc}",
        )


@router.post("/image", response_model=ImageResponse)
async def generate_image(
    body: ImageRequest,
    current_user: dict = Depends(get_current_user),
) -> ImageResponse:
    """Generate images from a prompt using the chat completions endpoint.

    All OpenRouter image models use /chat/completions with a modalities param.
    Models that output only images use ["image"]; those that also output text
    use ["image", "text"]. We look this up from the model cache; default to
    ["image", "text"] when the model is not yet cached.
    """
    # Determine modalities from cache; default ["image", "text"] works for most models
    try:
        conn = get_conn()
        cached_modalities = get_model_output_modalities(conn, body.model)
    except Exception:
        cached_modalities = []

    if cached_modalities:
        # If the cache says the model only outputs image (no text), use ["image"]
        modalities = ["image"] if set(cached_modalities) == {"image"} else ["image", "text"]
    else:
        # Safe default: ["image", "text"] works for Gemini, GPT-image, etc.
        # For image-only models that fail with this, the error surfaces to the user.
        modalities = ["image", "text"]

    image_config: dict = {}
    if body.aspect_ratio:
        image_config["aspect_ratio"] = body.aspect_ratio
    if body.image_size:
        image_config["image_size"] = body.image_size

    try:
        result = await openrouter.generate_image_chat(
            model=body.model,
            prompt=body.prompt,
            modalities=modalities,
            image_config=image_config if image_config else None,
        )
    except Exception as exc:
        raise HTTPException(
            status_code=status.HTTP_502_BAD_GATEWAY,
            detail=f"OpenRouter error: {exc}",
        )

    try:
        message = result.get("choices", [{}])[0].get("message", {})
        images = []
        for item in message.get("images", []):
            img_url = item.get("image_url", {}).get("url")
            images.append(ImageResult(
                url=img_url,
                b64_json=None,
                revised_prompt=message.get("content") or None,
            ))
        if not images:
            raise HTTPException(
                status_code=status.HTTP_502_BAD_GATEWAY,
                detail="No images returned by model. Verify the model supports image generation.",
            )

        # Persist each image to DB
        user_id = current_user.get("id") or current_user.get("sub")
        now = datetime.now(timezone.utc).replace(tzinfo=None)
        stored: list[ImageResult] = []
        sql_insert = "INSERT INTO generated_images (user_id, model_id, prompt, image_data, created_at) VALUES (?, ?, ?, ?, ?) RETURNING id"
        async with get_write_lock():
            conn = get_conn()
            for img in images:
                if img.url:
                    row = conn.execute(
                        sql_insert, [user_id, body.model, body.prompt, img.url, now]
                    ).fetchone()
                    image_id = str(row[0]) if row else None
                else:
                    image_id = None
                stored.append(ImageResult(
                    url=img.url,
                    b64_json=img.b64_json,
                    revised_prompt=img.revised_prompt,
                    image_id=image_id,
                ))

        return ImageResponse(
            id=result.get("id", ""),
            model=result.get("model", body.model),
            images=stored,
        )
    except HTTPException:
        raise
    except (KeyError, TypeError) as exc:
        raise HTTPException(
            status_code=status.HTTP_502_BAD_GATEWAY,
            detail=f"Unexpected response format: {exc}",
        )


@router.get("/images")
async def list_generated_images(
    current_user: dict = Depends(get_current_user),
) -> list[dict]:
    """Return all generated images for the current user, newest first."""
    user_id = current_user.get("id") or current_user.get("sub")
    conn = get_conn()
    sql_fetch = "SELECT id, model_id, prompt, image_data, created_at FROM generated_images WHERE user_id = ? ORDER BY created_at DESC"
    rows = conn.execute(sql_fetch, [user_id]).fetchall()
    return [
        {
            "id": str(r[0]),
            "model_id": r[1],
            "prompt": r[2],
            "image_data": r[3],
            "created_at": r[4].isoformat() if r[4] else None,
        }
        for r in rows
    ]


@router.get("/images/{image_id}")
async def get_generated_image(
    image_id: str,
    current_user: dict = Depends(get_current_user),
) -> dict:
    """Return details for a single generated image."""
    user_id = current_user.get("id") or current_user.get("sub")
    conn = get_conn()
    row = conn.execute(
        """SELECT id, model_id, prompt, image_data, created_at
           FROM generated_images
           WHERE id = ? AND user_id = ?""",
        [image_id, user_id],
    ).fetchone()
    if not row:
        raise HTTPException(status_code=404, detail="Image not found")
    return {
        "id": str(row[0]),
        "model_id": row[1],
        "prompt": row[2],
        "image_data": row[3],
        "created_at": row[4].isoformat() if row[4] else None,
    }


@router.post("/video", response_model=VideoResponse)
async def generate_video(
    body: VideoRequest,
    current_user: dict = Depends(get_current_user),
) -> VideoResponse:
    """Queue a text-to-video generation job for background processing."""
    user_id = current_user.get("id") or current_user.get("sub")
    now = datetime.now(timezone.utc).replace(tzinfo=None)
    request_params = json.dumps({
        "model": body.model,
        "prompt": body.prompt,
        "duration_seconds": body.duration_seconds,
        "aspect_ratio": body.aspect_ratio,
        "resolution": body.resolution,
    })
    db_id = None
    async with get_write_lock():
        conn = get_conn()
        row = conn.execute(
            """INSERT INTO generated_videos
               (user_id, job_id, model_id, prompt, status, request_params, generation_type, created_at, updated_at)
               VALUES (?, ?, ?, ?, 'queued', ?, 'text_to_video', ?, ?) RETURNING id""",
            [user_id, "", body.model, body.prompt, request_params, now, now],
        ).fetchone()
        if row:
            db_id = str(row[0])
    return VideoResponse(
        id="",
        db_id=db_id,
        model=body.model,
        status="queued",
    )


@router.post("/video/from-image", response_model=VideoResponse)
async def generate_video_from_image(
    body: VideoFromImageRequest,
    current_user: dict = Depends(get_current_user),
) -> VideoResponse:
    """Queue an image-to-video generation job for background processing."""
    user_id = current_user.get("id") or current_user.get("sub")
    now = datetime.now(timezone.utc).replace(tzinfo=None)
    request_params = json.dumps({
        "model": body.model,
        "image_url": body.image_url,
        "prompt": body.prompt,
        "duration_seconds": body.duration_seconds,
        "aspect_ratio": body.aspect_ratio,
        "resolution": body.resolution,
    })
    db_id = None
    async with get_write_lock():
        conn = get_conn()
        row = conn.execute(
            """INSERT INTO generated_videos
               (user_id, job_id, model_id, prompt, status, request_params, generation_type, created_at, updated_at)
               VALUES (?, ?, ?, ?, 'queued', ?, 'image_to_video', ?, ?) RETURNING id""",
            [user_id, "", body.model, body.prompt, request_params, now, now],
        ).fetchone()
        if row:
            db_id = str(row[0])
    return VideoResponse(
        id="",
        db_id=db_id,
        model=body.model,
        status="queued",
    )


@router.get("/video/status", response_model=VideoResponse)
async def poll_video_status(
    polling_url: str,
    current_user: dict = Depends(get_current_user),
) -> VideoResponse:
    """Poll status of a video generation job; updates DB row when completed/failed."""
    try:
        result = await openrouter.poll_video_status(polling_url)
    except Exception as exc:
        raise HTTPException(
            status_code=status.HTTP_502_BAD_GATEWAY,
            detail=f"OpenRouter error: {exc}",
        )

    job_status = result.get("status", "processing")
    urls = result.get("unsigned_urls") or result.get("video_urls")
    video_url = (urls or [None])[0]

    # Update the DB row for this job when a terminal state is reached
    if job_status in ("completed", "failed"):
        now = datetime.now(timezone.utc).replace(tzinfo=None)
        async with get_write_lock():
            conn = get_conn()
            conn.execute(
                """UPDATE generated_videos
                   SET status = ?, video_url = ?, updated_at = ?
                   WHERE job_id = ?""",
                [job_status, video_url, now, result.get("id", "")],
            )

    return VideoResponse(
        id=result.get("id", ""),
        model=result.get("model", ""),
        status=job_status,
        polling_url=result.get("polling_url"),
        video_urls=urls,
        video_url=video_url,
        error=result.get("error"),
        metadata=result.get("metadata"),
    )


@router.get("/videos")
async def list_generated_videos(
    current_user: dict = Depends(get_current_user),
) -> list[dict]:
    """Return all generated video jobs for the current user, newest first."""
    user_id = current_user.get("id") or current_user.get("sub")
    conn = get_conn()
    rows = conn.execute(
        """SELECT id, job_id, model_id, prompt, polling_url, status, video_url, error, created_at
           FROM generated_videos
           WHERE user_id = ?
           ORDER BY created_at DESC""",
        [user_id],
    ).fetchall()
    return [
        {
            "id": str(r[0]),
            "job_id": r[1],
            "model_id": r[2],
            "prompt": r[3],
            "polling_url": r[4],
            "status": r[5],
            "video_url": r[6],
            "error": r[7],
            "created_at": r[8].isoformat() if r[8] else None,
        }
        for r in rows
    ]


@router.get("/videos/{video_id}")
async def get_generated_video(
    video_id: str,
    current_user: dict = Depends(get_current_user),
) -> dict:
    """Return details for a single video generation job."""
    user_id = current_user.get("id") or current_user.get("sub")
    conn = get_conn()
    row = conn.execute(
        """SELECT id, job_id, model_id, prompt, polling_url, status, video_url, error, created_at, updated_at
           FROM generated_videos
           WHERE id = ? AND user_id = ?""",
        [video_id, user_id],
    ).fetchone()
    if not row:
        raise HTTPException(status_code=404, detail="Video job not found")
    return {
        "id": str(row[0]),
        "job_id": row[1],
        "model_id": row[2],
        "prompt": row[3],
        "polling_url": row[4],
        "status": row[5],
        "video_url": row[6],
        "error": row[7],
        "created_at": row[8].isoformat() if row[8] else None,
        "updated_at": row[9].isoformat() if row[9] else None,
    }


@router.post("/videos/{video_id}/cancel", status_code=200)
async def cancel_video_job(
    video_id: str,
    current_user: dict = Depends(get_current_user),
) -> dict[str, str]:
    """Mark a video job as 'cancelled' if it belongs to the current user and is not terminal."""
    user_id = current_user.get("id") or current_user.get("sub")
    conn = get_conn()
    row = conn.execute(
        "SELECT status FROM generated_videos WHERE id = ? AND user_id = ?",
        [video_id, user_id],
    ).fetchone()
    if not row:
        raise HTTPException(status_code=404, detail="Video job not found")
    job_status = row[0]
    if job_status in ("completed", "failed", "cancelled"):
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=f"Cannot cancel job with status '{job_status}'",
        )
    now = datetime.now(timezone.utc).replace(tzinfo=None)
    async with get_write_lock():
        conn.execute(
            "UPDATE generated_videos SET status = 'cancelled', updated_at = ? WHERE id = ?",
            [now, video_id],
        )
    return {"status": "ok", "job_id": video_id}
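The modality selection in `generate_image` reduces to one decision: image-only models get `["image"]`, everything else, including uncached models, gets the safe default `["image", "text"]`. A standalone sketch of that rule (the helper name `choose_modalities` is hypothetical; the router inlines this logic):

```python
def choose_modalities(cached: list[str]) -> list[str]:
    """Pick the modalities param for an image request, mirroring generate_image:
    a cached image-only model gets ["image"]; anything else (including an
    empty/unknown cache entry) falls back to ["image", "text"]."""
    if cached and set(cached) == {"image"}:
        return ["image"]
    return ["image", "text"]


assert choose_modalities(["image"]) == ["image"]
assert choose_modalities(["image", "text"]) == ["image", "text"]
assert choose_modalities([]) == ["image", "text"]  # uncached model: safe default
```

Note that the fallback is deliberately permissive: an image-only model that rejects `["image", "text"]` surfaces a provider error to the user rather than failing silently.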
@@ -0,0 +1,150 @@
"""Images router: upload reference images and list user's uploads."""
import os
import uuid

from fastapi import APIRouter, Depends, HTTPException, UploadFile, status
from fastapi.responses import FileResponse

from ..db import get_conn, get_write_lock
from ..dependencies import get_current_user

router = APIRouter(prefix="/images", tags=["images"])

UPLOAD_DIR = os.getenv("UPLOAD_DIR", "data/uploads")
MAX_SIZE_BYTES = 10 * 1024 * 1024  # 10 MB
ALLOWED_CONTENT_TYPES = {"image/jpeg", "image/png", "image/webp", "image/gif"}


@router.post("/upload", status_code=status.HTTP_201_CREATED)
async def upload_image(
    file: UploadFile,
    current_user: dict = Depends(get_current_user),
) -> dict:
    """Upload a reference image and store metadata in DuckDB."""
    if file.content_type not in ALLOWED_CONTENT_TYPES:
        raise HTTPException(
            status_code=status.HTTP_415_UNSUPPORTED_MEDIA_TYPE,
            detail=f"Unsupported content type '{file.content_type}'. Allowed: {sorted(ALLOWED_CONTENT_TYPES)}",
        )

    data = await file.read()
    if len(data) > MAX_SIZE_BYTES:
        raise HTTPException(
            status_code=status.HTTP_413_REQUEST_ENTITY_TOO_LARGE,
            detail=f"File exceeds maximum allowed size of {MAX_SIZE_BYTES // (1024 * 1024)} MB.",
        )

    user_id = current_user["id"]
    image_id = str(uuid.uuid4())
    ext = (file.filename or "").rsplit(".", 1)[-1].lower() if "." in (file.filename or "") else "bin"
    safe_filename = f"{image_id}.{ext}"
    user_dir = os.path.join(UPLOAD_DIR, user_id)
    os.makedirs(user_dir, exist_ok=True)
    file_path = os.path.join(user_dir, safe_filename)

    with open(file_path, "wb") as f:
        f.write(data)

    async with get_write_lock():
        conn = get_conn()
        conn.execute(
            """
            INSERT INTO uploaded_images (id, user_id, filename, content_type, file_path, size_bytes)
            VALUES (?, ?, ?, ?, ?, ?)
            """,
            [image_id, user_id, file.filename or safe_filename,
             file.content_type, file_path, len(data)],
        )

    return {
        "id": image_id,
        "filename": file.filename or safe_filename,
        "content_type": file.content_type,
        "size_bytes": len(data),
    }


@router.get("/", status_code=status.HTTP_200_OK)
async def list_images(
    current_user: dict = Depends(get_current_user),
) -> list[dict]:
    """Return all uploaded images for the current user."""
    conn = get_conn()
    rows = conn.execute(
        """
        SELECT id, filename, content_type, size_bytes, created_at
        FROM uploaded_images
        WHERE user_id = ?
        ORDER BY created_at DESC
        """,
        [current_user["id"]],
    ).fetchall()

    return [
        {
            "id": str(row[0]),
            "filename": row[1],
            "content_type": row[2],
            "size_bytes": row[3],
            "created_at": row[4].isoformat() if row[4] else None,
        }
        for row in rows
    ]


@router.get("/{image_id}", status_code=status.HTTP_200_OK)
async def get_image_details(
    image_id: str,
    current_user: dict = Depends(get_current_user),
) -> dict:
    """Return metadata for a single uploaded image."""
    conn = get_conn()
    row = conn.execute(
        """
        SELECT id, filename, content_type, size_bytes, created_at
        FROM uploaded_images
        WHERE id = ? AND user_id = ?
        """,
        [image_id, current_user["id"]],
    ).fetchone()

    if not row:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND, detail="Image not found"
        )

    return {
        "id": str(row[0]),
        "filename": row[1],
        "content_type": row[2],
        "size_bytes": row[3],
        "created_at": row[4].isoformat() if row[4] else None,
    }


@router.get("/{image_id}/file", status_code=status.HTTP_200_OK)
async def serve_image(
    image_id: str,
    current_user: dict = Depends(get_current_user),
) -> FileResponse:
    """Serve the raw image file. Only accessible by the owning user."""
    conn = get_conn()
    row = conn.execute(
        "SELECT file_path, content_type, user_id FROM uploaded_images WHERE id = ?",
        [image_id],
    ).fetchone()

    if row is None:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND, detail="Image not found.")
    if str(row[2]) != current_user["id"]:
        raise HTTPException(
            status_code=status.HTTP_403_FORBIDDEN, detail="Access denied.")

    file_path: str = row[0]
    if not os.path.isfile(file_path):
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND, detail="Image file missing.")

    return FileResponse(file_path, media_type=row[1])
@@ -0,0 +1,47 @@
"""Models router: list and refresh the OpenRouter model cache."""
from fastapi import APIRouter, Depends, HTTPException, Query, status

from ..db import get_conn, get_write_lock
from ..dependencies import get_current_user, require_admin
from ..services import models as models_service

router = APIRouter(prefix="/models", tags=["models"])


@router.get("/")
async def list_models(
    modality: str | None = Query(
        None,
        description="Filter by output modality: text, image, video, audio",
    ),
    _: dict = Depends(get_current_user),
):
    """Return cached models. Auto-refreshes the cache if stale (older than 24 h)."""
    conn = get_conn()
    if models_service.is_cache_stale(conn):
        async with get_write_lock():
            # Re-check inside the lock to avoid redundant parallel refreshes
            if models_service.is_cache_stale(conn):
                try:
                    await models_service.refresh_models_cache(conn)
                except Exception as exc:
                    raise HTTPException(
                        status_code=status.HTTP_502_BAD_GATEWAY,
                        detail=f"Failed to refresh model cache: {exc}",
                    )
    return models_service.get_cached_models(conn, modality)


@router.post("/refresh", status_code=200)
async def refresh_models(_: dict = Depends(require_admin)):
    """Force-refresh the model cache from OpenRouter. Admin only."""
    conn = get_conn()
    async with get_write_lock():
        try:
            count = await models_service.refresh_models_cache(conn)
        except Exception as exc:
            raise HTTPException(
                status_code=status.HTTP_502_BAD_GATEWAY,
                detail=f"OpenRouter error: {exc}",
            )
    return {"refreshed": count}
@@ -0,0 +1,86 @@
"""Users router: self-service profile and admin user management."""
from fastapi import APIRouter, Depends, HTTPException, status

from ..dependencies import get_current_user, require_admin
from ..models.users import SetRoleRequest, UpdateUserRequest, UserResponse
from ..services.users import (
    delete_user,
    get_user,
    list_users,
    set_user_role,
    update_user,
)

router = APIRouter(prefix="/users", tags=["users"])

ALLOWED_ROLES = {"user", "admin"}


# ---------------------------------------------------------------------------
# Self-service
# ---------------------------------------------------------------------------

@router.get("/me", response_model=UserResponse)
async def get_me(current_user: dict = Depends(get_current_user)) -> UserResponse:
    user = await get_user(current_user["id"])
    if user is None:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND, detail="User not found.")
    return UserResponse(**user)


@router.put("/me", response_model=UserResponse)
async def update_me(
    body: UpdateUserRequest,
    current_user: dict = Depends(get_current_user),
) -> UserResponse:
    try:
        user = await update_user(current_user["id"], email=body.email, password=body.password)
    except ValueError as exc:
        raise HTTPException(
            status_code=status.HTTP_409_CONFLICT, detail=str(exc))
    if user is None:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND, detail="User not found.")
    return UserResponse(**user)


# ---------------------------------------------------------------------------
# Admin
# ---------------------------------------------------------------------------

@router.get("", response_model=list[UserResponse])
async def get_all_users(_: dict = Depends(require_admin)) -> list[UserResponse]:
    users = await list_users()
    return [UserResponse(**u) for u in users]


@router.delete("/{user_id}", status_code=status.HTTP_204_NO_CONTENT)
async def remove_user(
    user_id: str,
    current_user: dict = Depends(require_admin),
) -> None:
    if user_id == current_user["id"]:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail="Cannot delete your own account.",
        )
    await delete_user(user_id)


@router.put("/{user_id}/role", response_model=UserResponse)
async def change_role(
    user_id: str,
    body: SetRoleRequest,
    _: dict = Depends(require_admin),
) -> UserResponse:
    if body.role not in ALLOWED_ROLES:
        raise HTTPException(
            status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,
            detail=f"Role must be one of: {', '.join(sorted(ALLOWED_ROLES))}.",
        )
    user = await set_user_role(user_id, body.role)
    if user is None:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND, detail="User not found.")
    return UserResponse(**user)
@@ -0,0 +1,127 @@
"""Authentication service: password hashing, JWT creation/verification, token management."""
import os
from datetime import datetime, timedelta, timezone
from typing import Any

from jose import JWTError, jwt
from passlib.context import CryptContext

from ..db import get_conn, get_write_lock

_pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")

ACCESS_TOKEN_EXPIRE_MINUTES = 15
REFRESH_TOKEN_EXPIRE_DAYS = 7
ALGORITHM = "HS256"


def _secret() -> str:
    secret = os.getenv("JWT_SECRET")
    if not secret:
        raise RuntimeError("JWT_SECRET environment variable is not set.")
    return secret


# --- Password ---

def hash_password(plain: str) -> str:
    return _pwd_context.hash(plain)


def verify_password(plain: str, hashed: str) -> bool:
    return _pwd_context.verify(plain, hashed)


# --- Tokens ---

def create_access_token(user_id: str, email: str, role: str) -> str:
    expire = datetime.now(timezone.utc) + timedelta(minutes=ACCESS_TOKEN_EXPIRE_MINUTES)
    payload = {
        "sub": user_id,
        "email": email,
        "role": role,
        "exp": expire,
        "type": "access",
    }
    return jwt.encode(payload, _secret(), algorithm=ALGORITHM)


def create_refresh_token(user_id: str, jti: str) -> str:
    expire = datetime.now(timezone.utc) + timedelta(days=REFRESH_TOKEN_EXPIRE_DAYS)
    payload = {
        "sub": user_id,
        "jti": jti,
        "exp": expire,
        "type": "refresh",
    }
    return jwt.encode(payload, _secret(), algorithm=ALGORITHM)


def decode_token(token: str) -> dict[str, Any]:
    """Decode and validate a JWT. Raises JWTError on failure."""
    return jwt.decode(token, _secret(), algorithms=[ALGORITHM])


# --- Database operations ---

async def register_user(email: str, password: str) -> dict[str, Any]:
    """Insert a new user. Returns the created user row."""
    conn = get_conn()
    lock = get_write_lock()
    sql_check = "SELECT id FROM users WHERE email = ?"
    sql_insert = "INSERT INTO users (email, password_hash) VALUES (?, ?)"
    sql_fetch = "SELECT id, email, role FROM users WHERE email = ?"
    async with lock:
        existing = conn.execute(sql_check, [email]).fetchone()
        if existing:
            raise ValueError("Email already registered.")
        conn.execute(sql_insert, [email, hash_password(password)])
        row = conn.execute(sql_fetch, [email]).fetchone()
    if row is None:
        raise RuntimeError("Failed to fetch user after registration.")
    return {"id": str(row[0]), "email": row[1], "role": row[2]}


async def authenticate_user(email: str, password: str) -> dict[str, Any] | None:
    """Return user dict if credentials are valid, else None."""
    conn = get_conn()
    sql_fetch = "SELECT id, email, password_hash, role FROM users WHERE email = ?"
    row = conn.execute(sql_fetch, [email]).fetchone()
    if row is None or not verify_password(password, row[2]):
        return None
    return {"id": str(row[0]), "email": row[1], "role": row[3]}


async def store_refresh_token(user_id: str, jti: str) -> None:
    """Persist a refresh token JTI in the database."""
    conn = get_conn()
    lock = get_write_lock()
    sql_insert = "INSERT INTO refresh_tokens (jti, user_id, expires_at) VALUES (?, ?, ?)"
    expires_at = datetime.now(timezone.utc) + timedelta(days=REFRESH_TOKEN_EXPIRE_DAYS)
    async with lock:
        conn.execute(sql_insert, [jti, user_id, expires_at])


async def revoke_refresh_token(jti: str) -> None:
    """Mark a refresh token as revoked."""
    conn = get_conn()
    lock = get_write_lock()
    sql_update = "UPDATE refresh_tokens SET revoked = true WHERE jti = ?"
    async with lock:
        conn.execute(sql_update, [jti])


async def validate_refresh_token_jti(jti: str, user_id: str) -> bool:
    """Return True if the JTI exists, is not revoked, and belongs to user_id."""
    conn = get_conn()
    now = datetime.now(timezone.utc)
    sql_select = """
        SELECT 1 FROM refresh_tokens
        WHERE jti = ? AND user_id = ? AND revoked = false AND expires_at > ?
    """
    row = conn.execute(sql_select, [jti, user_id, now]).fetchone()
    return row is not None
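HS256, the algorithm configured above, is at its core HMAC-SHA256 over an encoded payload. The service uses python-jose for the real thing; the following stdlib-only sketch illustrates just the sign/verify idea, not the JWT wire format or the jose API, and `SECRET` is a demo stand-in for `JWT_SECRET`:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-secret"  # stand-in for JWT_SECRET; never hard-code in real code


def sign(payload: dict) -> str:
    """Encode the payload and append an HMAC-SHA256 signature."""
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"


def verify(token: str) -> dict:
    """Check the signature and expiry; return the payload or raise ValueError."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    payload = json.loads(base64.urlsafe_b64decode(body))
    if payload["exp"] < time.time():
        raise ValueError("expired")
    return payload


tok = sign({"sub": "user-1", "exp": time.time() + 60, "type": "access"})
assert verify(tok)["sub"] == "user-1"
```

Two details carry over to the real service: `hmac.compare_digest` avoids timing side-channels in signature comparison, and expiry is enforced at verification time, which is why short-lived access tokens pair with revocable refresh tokens.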
@@ -0,0 +1,246 @@
"""Model cache service: fetch from OpenRouter, store in DuckDB."""
import json
from datetime import datetime, timedelta, timezone
from typing import Any

import duckdb

from . import openrouter

CACHE_TTL_HOURS = 24


def _normalize_modality(raw: str) -> str:
    """Normalize OpenRouter modality labels to canonical values."""
    value = (raw or "").strip().lower()
    if value in {"text", "image", "video", "audio", "embeddings", "embedding"}:
        return "embeddings" if value == "embedding" else value
    if "image" in value:
        return "image"
    if "video" in value:
        return "video"
    if "audio" in value:
        return "audio"
    if "embed" in value:
        return "embeddings"
    return "text"


def _parse_modality(raw_modality: str) -> str:
    """Extract the output modality from an OpenRouter architecture.modality string.

    Examples: "text->text", "text+image->text", "text->image", "text->video"
    """
    output = raw_modality.split("->", 1)[-1] if "->" in raw_modality else raw_modality
    return _normalize_modality(output)


def _extract_output_modality(model: dict[str, Any]) -> str:
    """Extract the output modality using the OpenRouter schema, falling back to the legacy field."""
    architecture = model.get("architecture") or {}

    output_modalities = architecture.get("output_modalities") or model.get("output_modalities")
    if isinstance(output_modalities, list) and output_modalities:
        return _normalize_modality(str(output_modalities[0]))

    raw_modality = architecture.get("modality") or model.get("modality") or "text->text"
    if isinstance(raw_modality, str):
        return _parse_modality(raw_modality)
    return "text"


async def _fetch_models_for_cache() -> list[dict[str, Any]]:
    """Fetch broad + modality-specific lists and merge unique models by id."""
    by_id: dict[str, dict[str, Any]] = {}

    # Primary fetch: all modalities (per OpenRouter docs).
    primary = await openrouter.list_models(output_modalities="all")
    for model in primary:
        model_id = model.get("id")
        if model_id:
            by_id[model_id] = model

    # Warmup fetches: some providers surface better results with an explicit modality filter.
    for modality in ("image", "video", "audio", "embeddings", "text"):
        try:
            subset = await openrouter.list_models(output_modalities=modality)
        except Exception:
            continue
        for model in subset:
            model_id = model.get("id")
            if model_id and model_id not in by_id:
                by_id[model_id] = model

    return list(by_id.values())


async def refresh_models_cache(conn: duckdb.DuckDBPyConnection) -> int:
    """Fetch all models from OpenRouter and replace the cache. Returns the count stored."""
    raw = await _fetch_models_for_cache()
    # Use naive UTC to avoid DuckDB TIMESTAMP tz-stripping inconsistencies.
    now = datetime.now(timezone.utc).replace(tzinfo=None)

    conn.execute("DELETE FROM models_cache")
    count = 0
    for m in raw:
        modality = _extract_output_modality(m)
        pricing = m.get("pricing")
        model_id = m.get("id", "")
        if not model_id:
            continue
        # Full output_modalities array from architecture (for the proper modalities param in image gen).
        architecture = m.get("architecture") or {}
        raw_output_modalities: list | None = (
            architecture.get("output_modalities") or m.get("output_modalities")
        )
        output_modalities_json: str | None = (
            json.dumps([_normalize_modality(str(v)) for v in raw_output_modalities])
            if isinstance(raw_output_modalities, list)
            else None
        )
        conn.execute(
            """
            INSERT INTO models_cache (model_id, name, modality, context_length, pricing, fetched_at, output_modalities)
            VALUES (?, ?, ?, ?, ?, ?, ?)
            ON CONFLICT (model_id) DO UPDATE SET
                name = excluded.name,
                modality = excluded.modality,
                context_length = excluded.context_length,
                pricing = excluded.pricing,
                fetched_at = excluded.fetched_at,
                output_modalities = excluded.output_modalities
            """,
            [
                model_id,
                m.get("name", model_id),
                modality,
                m.get("context_length"),
                json.dumps(pricing) if pricing else None,
                now,
                output_modalities_json,
            ],
        )
        count += 1
    return count


def is_cache_stale(conn: duckdb.DuckDBPyConnection) -> bool:
    """Return True if the cache is empty or was last fetched more than CACHE_TTL_HOURS ago."""
    row = conn.execute("SELECT MAX(fetched_at) FROM models_cache").fetchone()
    if not row or row[0] is None:
        return True
    last_fetched = row[0]
    # DuckDB TIMESTAMP is always naive; compare against naive UTC.
    if last_fetched.tzinfo is not None:
        last_fetched = last_fetched.replace(tzinfo=None)
    now_naive = datetime.now(timezone.utc).replace(tzinfo=None)
    return now_naive - last_fetched > timedelta(hours=CACHE_TTL_HOURS)


def get_cached_models(
    conn: duckdb.DuckDBPyConnection,
    modality: str | None = None,
) -> list[dict[str, Any]]:
    """Return cached models, optionally filtered by modality, ordered by name."""
    if modality:
        rows = conn.execute(
            """
            SELECT model_id, name, modality, context_length, pricing
            FROM models_cache
            WHERE modality = ?
            ORDER BY name
            """,
            [modality],
        ).fetchall()
    else:
        rows = conn.execute(
            """
            SELECT model_id, name, modality, context_length, pricing
            FROM models_cache
            ORDER BY name
            """
        ).fetchall()

    result = []
    for row in rows:
        pricing = None
        if row[4]:
            try:
                pricing = json.loads(row[4])
            except (json.JSONDecodeError, TypeError):
                pricing = None
        result.append({
            "id": row[0],
            "name": row[1],
            "modality": row[2],
            "context_length": row[3],
            "pricing": pricing,
        })
    return result


def get_model_output_modalities(
    conn: duckdb.DuckDBPyConnection,
    model_id: str,
) -> list[str]:
    """Return the output_modalities list for a model; an empty list if not found."""
    row = conn.execute(
        "SELECT output_modalities FROM models_cache WHERE model_id = ?",
        [model_id],
    ).fetchone()
    if not row or not row[0]:
        return []
    try:
        return json.loads(row[0])
    except (json.JSONDecodeError, TypeError):
        return []


def get_cache_status(conn: duckdb.DuckDBPyConnection) -> dict[str, Any]:
    """Return the cache's last update time and model count."""
    row = conn.execute(
        "SELECT MAX(fetched_at), COUNT(*) FROM models_cache"
    ).fetchone()
    last_updated, model_count = (row[0], row[1]) if row else (None, 0)
    return {"last_updated": last_updated, "model_count": model_count}


def mark_timed_out_video_jobs(conn: duckdb.DuckDBPyConnection, timeout_minutes: int = 120) -> int:
    """Mark video jobs stuck in 'queued' or 'processing' status for too long as 'failed'.

    Returns the number of jobs marked as timed out.
    """
    timeout_threshold = datetime.now(timezone.utc) - timedelta(minutes=timeout_minutes)

    # Find timed-out jobs.
    timed_out_rows = conn.execute(
        """
        SELECT id FROM generated_videos
        WHERE status IN ('queued', 'processing')
          AND updated_at < ?
        """,
        [timeout_threshold],
    ).fetchall()

    if not timed_out_rows:
        return 0

    job_ids = [row[0] for row in timed_out_rows]
    placeholders = ",".join(["?"] * len(job_ids))

    # Update them to failed.
    conn.execute(
        f"""
        UPDATE generated_videos
        SET status = 'failed', updated_at = ?
        WHERE id IN ({placeholders})
        """,
        [datetime.now(timezone.utc)] + job_ids,
    )

    return len(job_ids)
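The modality-parsing helpers above reduce OpenRouter's `"input->output"` architecture strings to a single canonical output label. A minimal, self-contained sketch of that logic (simplified from `_parse_modality` plus `_normalize_modality`; not the exact service code):

```python
def parse_output_modality(raw: str) -> str:
    """Take the part after '->' and normalize a few known labels."""
    output = raw.split("->", 1)[-1] if "->" in raw else raw
    value = output.strip().lower()
    for label in ("image", "video", "audio"):
        if label in value:
            return label
    if "embed" in value:
        return "embeddings"
    return "text"


print(parse_output_modality("text+image->text"))  # text
print(parse_output_modality("text->image"))       # image
```

Unrecognized or missing labels fall through to `"text"`, matching the service's conservative default.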
@@ -0,0 +1,213 @@
"""OpenRouter API client (OpenAI-compatible interface)."""
import os
from typing import Any

import httpx

OPENROUTER_BASE_URL = "https://openrouter.ai/api/v1"


def _api_key() -> str:
    key = os.getenv("OPENROUTER_API_KEY")
    if not key:
        raise RuntimeError("OPENROUTER_API_KEY environment variable is not set.")
    return key


def _headers() -> dict[str, str]:
    return {
        "Authorization": f"Bearer {_api_key()}",
        "Content-Type": "application/json",
        "HTTP-Referer": os.getenv("APP_URL", "https://ai.allucanget.biz"),
        "X-Title": os.getenv("APP_NAME", "All You Can GET AI"),
    }


async def list_models(
    output_modalities: str = "all",
    category: str | None = None,
    supported_parameters: str | None = None,
) -> list[dict[str, Any]]:
    """Return available models from OpenRouter.

    Docs: GET /models supports query filters like output_modalities.
    """
    base_url = os.getenv("OPENROUTER_BASE_URL", OPENROUTER_BASE_URL)
    params: dict[str, str] = {"output_modalities": output_modalities}
    if category:
        params["category"] = category
    if supported_parameters:
        params["supported_parameters"] = supported_parameters

    async with httpx.AsyncClient(timeout=15) as client:
        response = await client.get(f"{base_url}/models", headers=_headers(), params=params)
        response.raise_for_status()
        return response.json().get("data", [])


async def chat_completion(
    model: str,
    messages: list[dict[str, str]],
    temperature: float = 0.7,
    max_tokens: int = 1024,
) -> dict[str, Any]:
    """Send a chat completion request to OpenRouter."""
    base_url = os.getenv("OPENROUTER_BASE_URL", OPENROUTER_BASE_URL)
    payload = {
        "model": model,
        "messages": messages,
        "temperature": temperature,
        "max_tokens": max_tokens,
    }
    async with httpx.AsyncClient(timeout=60) as client:
        response = await client.post(f"{base_url}/chat/completions", headers=_headers(), json=payload)
        response.raise_for_status()
        return response.json()


async def generate_image(
    model: str,
    prompt: str,
    n: int = 1,
    size: str = "1024x1024",
) -> dict[str, Any]:
    """Request image generation via OpenRouter /images/generations."""
    base_url = os.getenv("OPENROUTER_BASE_URL", OPENROUTER_BASE_URL)
    payload = {"model": model, "prompt": prompt, "n": n, "size": size}
    async with httpx.AsyncClient(timeout=120) as client:
        response = await client.post(f"{base_url}/images/generations", headers=_headers(), json=payload)
        response.raise_for_status()
        return response.json()


async def generate_video(
    model: str,
    prompt: str,
    duration_seconds: int | None = None,
    aspect_ratio: str = "16:9",
    resolution: str | None = None,
    generate_audio: bool | None = None,
) -> dict[str, Any]:
    """Request text-to-video generation via OpenRouter POST /videos."""
    base_url = os.getenv("OPENROUTER_BASE_URL", OPENROUTER_BASE_URL)
    payload: dict[str, Any] = {
        "model": model,
        "prompt": prompt,
        "aspect_ratio": aspect_ratio,
    }
    if duration_seconds is not None:
        # The API uses 'duration', not 'duration_seconds'.
        payload["duration"] = duration_seconds
    if resolution is not None:
        payload["resolution"] = resolution
    if generate_audio is not None:
        payload["generate_audio"] = generate_audio
    async with httpx.AsyncClient(timeout=120) as client:
        response = await client.post(f"{base_url}/videos", headers=_headers(), json=payload)
        response.raise_for_status()
        return response.json()


async def generate_video_from_image(
    model: str,
    image_url: str,
    prompt: str,
    duration_seconds: int | None = None,
    aspect_ratio: str = "16:9",
    resolution: str | None = None,
    generate_audio: bool | None = None,
) -> dict[str, Any]:
    """Request image-to-video generation via OpenRouter POST /videos.

    Uses a frame_images array with first_frame, per the OpenRouter API spec.
    """
    base_url = os.getenv("OPENROUTER_BASE_URL", OPENROUTER_BASE_URL)
    payload: dict[str, Any] = {
        "model": model,
        "prompt": prompt,
        "aspect_ratio": aspect_ratio,
        "frame_images": [
            {
                "type": "image_url",
                "image_url": {"url": image_url},
                "frame_type": "first_frame",
            }
        ],
    }
    if duration_seconds is not None:
        payload["duration"] = duration_seconds
    if resolution is not None:
        payload["resolution"] = resolution
    if generate_audio is not None:
        payload["generate_audio"] = generate_audio
    async with httpx.AsyncClient(timeout=120) as client:
        response = await client.post(f"{base_url}/videos", headers=_headers(), json=payload)
        response.raise_for_status()
        return response.json()


async def poll_video_status(polling_url: str) -> dict[str, Any]:
    """Check the status of a video generation job via its polling_url."""
    async with httpx.AsyncClient(timeout=15) as client:
        response = await client.get(polling_url, headers=_headers())
        response.raise_for_status()
        return response.json()


async def list_video_models() -> list[dict[str, Any]]:
    """Return video generation models from the dedicated /videos/models endpoint."""
    base_url = os.getenv("OPENROUTER_BASE_URL", OPENROUTER_BASE_URL)
    async with httpx.AsyncClient(timeout=15) as client:
        response = await client.get(f"{base_url}/videos/models", headers=_headers())
        response.raise_for_status()
        return response.json().get("data", [])


async def generate_image_chat(
    model: str,
    prompt: str,
    modalities: list[str] | None = None,
    image_config: dict[str, Any] | None = None,
) -> dict[str, Any]:
    """Request image generation via Chat Completions with modalities.

    Used by models like FLUX.2 Klein 4B and GPT-5 Image Mini that output
    images through the chat completions endpoint rather than /images/generations.
    """
    base_url = os.getenv("OPENROUTER_BASE_URL", OPENROUTER_BASE_URL)
    if modalities is None:
        # Image-only models (FLUX) vs multimodal (GPT-5 Image Mini).
        modalities = ["image"]
    payload: dict[str, Any] = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "modalities": modalities,
    }
    if image_config:
        payload["image_config"] = image_config
    async with httpx.AsyncClient(timeout=120) as client:
        response = await client.post(f"{base_url}/chat/completions", headers=_headers(), json=payload)
        response.raise_for_status()
        return response.json()
@@ -0,0 +1,86 @@
"""User management service: CRUD helpers against DuckDB."""
from typing import Any

from ..db import get_conn, get_write_lock
from .auth import hash_password


async def get_user(user_id: str) -> dict[str, Any] | None:
    conn = get_conn()
    row = conn.execute(
        "SELECT id, email, role FROM users WHERE id = ?", [user_id]
    ).fetchone()
    if row is None:
        return None
    return {"id": str(row[0]), "email": row[1], "role": row[2]}


async def list_users() -> list[dict[str, Any]]:
    conn = get_conn()
    rows = conn.execute(
        "SELECT id, email, role FROM users ORDER BY email"
    ).fetchall()
    return [{"id": str(r[0]), "email": r[1], "role": r[2]} for r in rows]


async def update_user(
    user_id: str,
    email: str | None = None,
    password: str | None = None,
) -> dict[str, Any] | None:
    """Update email and/or password. Returns the updated user, or None if not found."""
    conn = get_conn()
    lock = get_write_lock()

    if email is None and password is None:
        return await get_user(user_id)

    async with lock:
        if email is not None:
            existing = conn.execute(
                "SELECT id FROM users WHERE email = ? AND id != ?",
                [email, user_id],
            ).fetchone()
            if existing:
                raise ValueError("Email already in use.")
            conn.execute(
                "UPDATE users SET email = ?, updated_at = now() WHERE id = ?",
                [email, user_id],
            )
        if password is not None:
            conn.execute(
                "UPDATE users SET password_hash = ?, updated_at = now() WHERE id = ?",
                [hash_password(password), user_id],
            )
        row = conn.execute(
            "SELECT id, email, role FROM users WHERE id = ?", [user_id]
        ).fetchone()

    if row is None:
        return None
    return {"id": str(row[0]), "email": row[1], "role": row[2]}


async def set_user_role(user_id: str, role: str) -> dict[str, Any] | None:
    conn = get_conn()
    lock = get_write_lock()
    async with lock:
        conn.execute(
            "UPDATE users SET role = ?, updated_at = now() WHERE id = ?",
            [role, user_id],
        )
    row = conn.execute(
        "SELECT id, email, role FROM users WHERE id = ?", [user_id]
    ).fetchone()
    if row is None:
        return None
    return {"id": str(row[0]), "email": row[1], "role": row[2]}


async def delete_user(user_id: str) -> bool:
    """Delete a user and their refresh tokens. Returns True if a row was removed."""
    conn = get_conn()
    lock = get_write_lock()
    async with lock:
        conn.execute("DELETE FROM refresh_tokens WHERE user_id = ?", [user_id])
        # RETURNING lets us report whether a user row was actually deleted,
        # matching the docstring (previously this unconditionally returned True).
        deleted = conn.execute(
            "DELETE FROM users WHERE id = ? RETURNING id", [user_id]
        ).fetchone()
    return deleted is not None
@@ -0,0 +1,159 @@
"""Background worker: processes queued/processing video generation jobs."""
import asyncio
import json
import logging
from datetime import datetime, timezone

import duckdb

from . import openrouter
from .models import mark_timed_out_video_jobs

logger = logging.getLogger(__name__)

# Interval between worker ticks (seconds)
WORKER_INTERVAL = 15
# Jobs to process per tick (prevents unbounded bursts)
BATCH_SIZE = 5


async def process_queued_jobs(conn: duckdb.DuckDBPyConnection, lock: asyncio.Lock) -> int:
    """Submit queued jobs to OpenRouter and transition them to 'processing'."""
    rows = conn.execute(
        """SELECT id, generation_type, request_params
           FROM generated_videos
           WHERE status = 'queued' AND request_params IS NOT NULL
           ORDER BY created_at ASC
           LIMIT ?""",
        [BATCH_SIZE],
    ).fetchall()

    processed = 0
    for row in rows:
        db_id, generation_type, raw_params = str(row[0]), row[1], row[2]
        try:
            params = json.loads(raw_params)
        except (json.JSONDecodeError, TypeError):
            logger.error("Bad request_params for video job %s", db_id)
            continue

        try:
            if generation_type == "image_to_video":
                result = await openrouter.generate_video_from_image(
                    model=params["model"],
                    image_url=params.get("image_url", ""),
                    prompt=params.get("prompt", ""),
                    duration_seconds=params.get("duration_seconds"),
                    aspect_ratio=params.get("aspect_ratio", "16:9"),
                    resolution=params.get("resolution"),
                )
            else:
                result = await openrouter.generate_video(
                    model=params["model"],
                    prompt=params.get("prompt", ""),
                    duration_seconds=params.get("duration_seconds"),
                    aspect_ratio=params.get("aspect_ratio", "16:9"),
                    resolution=params.get("resolution"),
                )
        except Exception as exc:
            logger.warning("OpenRouter call failed for job %s: %s", db_id, exc)
            now = datetime.now(timezone.utc).replace(tzinfo=None)
            async with lock:
                conn.execute(
                    "UPDATE generated_videos SET status = 'failed', error = ?, updated_at = ? WHERE id = ?",
                    [str(exc), now, db_id],
                )
            continue

        job_id = result.get("id", "")
        polling_url = result.get("polling_url")
        new_status = result.get("status", "processing")
        # Normalise terminal statuses returned immediately (rare but possible).
        if new_status not in ("queued", "processing", "completed", "failed", "cancelled"):
            new_status = "processing"

        urls = result.get("unsigned_urls") or result.get("video_urls")
        video_url = (urls or [None])[0]
        now = datetime.now(timezone.utc).replace(tzinfo=None)

        async with lock:
            conn.execute(
                """UPDATE generated_videos
                   SET job_id = ?, polling_url = ?, status = ?, video_url = ?, updated_at = ?
                   WHERE id = ?""",
                [job_id, polling_url, new_status, video_url, now, db_id],
            )
        processed += 1
        logger.info("Video job %s → %s (provider id: %s)", db_id, new_status, job_id)

    return processed


async def process_processing_jobs(conn: duckdb.DuckDBPyConnection, lock: asyncio.Lock) -> int:
    """Poll in-progress jobs and update them to 'completed' or 'failed'."""
    rows = conn.execute(
        """SELECT id, polling_url
           FROM generated_videos
           WHERE status = 'processing' AND polling_url IS NOT NULL
           ORDER BY updated_at ASC
           LIMIT ?""",
        [BATCH_SIZE],
    ).fetchall()

    updated = 0
    for row in rows:
        db_id, polling_url = str(row[0]), row[1]
        try:
            result = await openrouter.poll_video_status(polling_url)
        except Exception as exc:
            logger.warning("Polling failed for job %s: %s", db_id, exc)
            continue

        job_status = result.get("status", "processing")
        if job_status not in ("completed", "failed"):
            continue  # still in progress; check again next tick

        urls = result.get("unsigned_urls") or result.get("video_urls")
        video_url = (urls or [None])[0]
        error_msg = result.get("error")
        now = datetime.now(timezone.utc).replace(tzinfo=None)

        async with lock:
            conn.execute(
                """UPDATE generated_videos
                   SET status = ?, video_url = ?, error = ?, updated_at = ?
                   WHERE id = ?""",
                [job_status, video_url, error_msg, now, db_id],
            )
        updated += 1
        logger.info("Video job %s → %s", db_id, job_status)

    return updated


async def worker_tick(conn: duckdb.DuckDBPyConnection, lock: asyncio.Lock) -> None:
    """Single worker tick: submit queued, poll processing, expire timed-out."""
    queued = await process_queued_jobs(conn, lock)
    polled = await process_processing_jobs(conn, lock)
    async with lock:
        timed_out = mark_timed_out_video_jobs(conn, timeout_minutes=120)
    if queued or polled or timed_out:
        logger.info(
            "Worker tick: submitted=%d polled=%d timed_out=%d",
            queued, polled, timed_out,
        )


async def run_worker(conn: duckdb.DuckDBPyConnection, lock: asyncio.Lock) -> None:
    """Infinite loop: run a worker tick every WORKER_INTERVAL seconds."""
    logger.info("Video worker started (interval=%ds)", WORKER_INTERVAL)
    while True:
        try:
            await worker_tick(conn, lock)
        except asyncio.CancelledError:
            logger.info("Video worker stopped.")
            return
        except Exception as exc:
            logger.exception("Unexpected error in video worker: %s", exc)
        await asyncio.sleep(WORKER_INTERVAL)
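The timeout rule the worker relies on (a job is expired when its last update predates now minus the timeout window) is easy to sketch in isolation. A minimal, self-contained version of that comparison, not the service code itself:

```python
from datetime import datetime, timedelta, timezone


def is_timed_out(updated_at: datetime, now: datetime, timeout_minutes: int = 120) -> bool:
    """A job counts as timed out when its last update predates now - timeout."""
    return updated_at < now - timedelta(minutes=timeout_minutes)


now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
print(is_timed_out(datetime(2024, 1, 1, 9, 0, tzinfo=timezone.utc), now))   # True
print(is_timed_out(datetime(2024, 1, 1, 11, 0, tzinfo=timezone.utc), now))  # False
```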
@@ -0,0 +1,13 @@
# Dev-only dependencies for local development and testing
# Production dependencies are in requirements.txt

pytest
pytest-asyncio
Flask
gunicorn
Pygments
tomli
exceptiongroup
iniconfig
pluggy
@@ -0,0 +1,21 @@
anyio
bcrypt==4.0.1
blinker
certifi
cryptography
dnspython
duckdb
ecdsa
email-validator
fastapi
httpcore
httpx
Jinja2
MarkupSafe
packaging
passlib==1.7.4
pydantic
python-dotenv
python-jose
python-multipart
uvicorn
@@ -0,0 +1,119 @@
"""Integration tests for admin endpoints."""
import os
import pytest
import pytest_asyncio
from httpx import AsyncClient, ASGITransport

from app.main import app
from app import db as db_module

os.environ.setdefault("JWT_SECRET", "test-secret-key-for-testing-only")


@pytest.fixture(autouse=True)
def fresh_db():
    db_module._conn = None
    db_module.init_db(":memory:")
    yield
    db_module.close_db()
    db_module._conn = None


@pytest_asyncio.fixture
async def client(fresh_db):
    transport = ASGITransport(app=app)
    async with AsyncClient(transport=transport, base_url="http://test") as ac:
        yield ac


async def _register_login(client, email="user@example.com", password="secret123"):
    await client.post("/auth/register", json={"email": email, "password": password})
    resp = await client.post("/auth/login", json={"email": email, "password": password})
    return resp.json()


async def _admin_token(client, email="admin@example.com", password="secret123"):
    tokens = await _register_login(client, email, password)
    me = await client.get("/users/me", headers={"Authorization": f"Bearer {tokens['access_token']}"})
    db_module.get_conn().execute(
        "UPDATE users SET role = 'admin' WHERE id = ?", [me.json()["id"]]
    )
    login = await client.post("/auth/login", json={"email": email, "password": password})
    return login.json()["access_token"]


# ---------------------------------------------------------------------------
# GET /admin/stats
# ---------------------------------------------------------------------------


async def test_stats_as_admin(client):
    await _register_login(client, "user1@example.com")
    await _register_login(client, "user2@example.com")
    token = await _admin_token(client)

    resp = await client.get("/admin/stats", headers={"Authorization": f"Bearer {token}"})
    assert resp.status_code == 200
    data = resp.json()
    # 2 users + 1 admin + 1 seeded admin (ai@allucanget.biz)
    assert data["users"]["total"] == 4
    assert "by_role" in data["users"]
    assert "refresh_tokens" in data


async def test_stats_as_regular_user(client):
    tokens = await _register_login(client)
    resp = await client.get("/admin/stats", headers={"Authorization": f"Bearer {tokens['access_token']}"})
    assert resp.status_code == 403


async def test_stats_unauthenticated(client):
    resp = await client.get("/admin/stats")
    assert resp.status_code == 401


# ---------------------------------------------------------------------------
# GET /admin/health/db
# ---------------------------------------------------------------------------


async def test_db_health_as_admin(client):
    token = await _admin_token(client)
    resp = await client.get("/admin/health/db", headers={"Authorization": f"Bearer {token}"})
    assert resp.status_code == 200
    assert resp.json()["status"] == "ok"


async def test_db_health_as_regular_user(client):
    tokens = await _register_login(client)
    resp = await client.get("/admin/health/db", headers={"Authorization": f"Bearer {tokens['access_token']}"})
    assert resp.status_code == 403


# ---------------------------------------------------------------------------
# POST /admin/tokens/purge
# ---------------------------------------------------------------------------


async def test_purge_removes_revoked_tokens(client):
    tokens = await _register_login(client, "user@example.com")
    refresh_token = tokens["refresh_token"]

    # Logout to revoke the token
    await client.post("/auth/logout", json={"refresh_token": refresh_token})

    token = await _admin_token(client)
    resp = await client.post("/admin/tokens/purge", headers={"Authorization": f"Bearer {token}"})
    assert resp.status_code == 200
    data = resp.json()
    assert data["deleted"] >= 1


async def test_purge_nothing_to_remove(client):
    token = await _admin_token(client)
    resp = await client.post("/admin/tokens/purge", headers={"Authorization": f"Bearer {token}"})
    assert resp.status_code == 200
    # Admin login issued one active token; nothing to purge
    assert resp.json()["deleted"] == 0


async def test_purge_as_regular_user(client):
    tokens = await _register_login(client)
    resp = await client.post("/admin/tokens/purge", headers={"Authorization": f"Bearer {tokens['access_token']}"})
    assert resp.status_code == 403
@@ -0,0 +1,172 @@
"""Tests for AI endpoints — OpenRouter HTTP calls are fully mocked."""
import os

import pytest
import pytest_asyncio
from unittest.mock import AsyncMock, patch
from httpx import AsyncClient, ASGITransport

from app.main import app
from app import db as db_module

os.environ.setdefault("JWT_SECRET", "test-secret-key-for-testing-only")
os.environ.setdefault("OPENROUTER_API_KEY", "test-key")

FAKE_MODELS = [
    {"id": "openai/gpt-4o", "name": "GPT-4o", "context_length": 128000, "pricing": {"prompt": "0.000005"}},
    {"id": "anthropic/claude-3-haiku", "name": "Claude 3 Haiku", "context_length": 200000, "pricing": {}},
]

FAKE_CHAT_RESPONSE = {
    "id": "gen-abc123",
    "model": "openai/gpt-4o",
    "choices": [{"message": {"role": "assistant", "content": "Hello! How can I help?"}}],
    "usage": {"prompt_tokens": 10, "completion_tokens": 8, "total_tokens": 18},
}


@pytest.fixture(autouse=True)
def fresh_db():
    db_module._conn = None
    db_module.init_db(":memory:")
    yield
    db_module.close_db()
    db_module._conn = None


@pytest_asyncio.fixture
async def client(fresh_db):
    transport = ASGITransport(app=app)
    async with AsyncClient(transport=transport, base_url="http://test") as ac:
        yield ac


async def _user_token(client):
    await client.post("/auth/register", json={"email": "user@example.com", "password": "secret123"})
    resp = await client.post("/auth/login", json={"email": "user@example.com", "password": "secret123"})
    return resp.json()["access_token"]


# ---------------------------------------------------------------------------
# GET /ai/models
# ---------------------------------------------------------------------------

async def test_list_models(client):
    token = await _user_token(client)
    with patch(
        "app.routers.ai.openrouter.list_models",
        new_callable=AsyncMock,
        return_value=FAKE_MODELS,
    ):
        resp = await client.get("/ai/models", headers={"Authorization": f"Bearer {token}"})

    assert resp.status_code == 200
    data = resp.json()
    assert len(data) == 2
    assert data[0]["id"] == "openai/gpt-4o"
    assert data[1]["name"] == "Claude 3 Haiku"


async def test_list_models_unauthenticated(client):
    resp = await client.get("/ai/models")
    assert resp.status_code == 401


async def test_list_models_upstream_error(client):
    token = await _user_token(client)
    with patch(
        "app.routers.ai.openrouter.list_models",
        new_callable=AsyncMock,
        side_effect=Exception("Connection refused"),
    ):
        resp = await client.get("/ai/models", headers={"Authorization": f"Bearer {token}"})

    assert resp.status_code == 502
    assert "OpenRouter error" in resp.json()["detail"]


# ---------------------------------------------------------------------------
# POST /ai/chat
# ---------------------------------------------------------------------------

async def test_chat_success(client):
    token = await _user_token(client)
    with patch(
        "app.routers.ai.openrouter.chat_completion",
        new_callable=AsyncMock,
        return_value=FAKE_CHAT_RESPONSE,
    ):
        resp = await client.post(
            "/ai/chat",
            json={
                "model": "openai/gpt-4o",
                "messages": [{"role": "user", "content": "Hello"}],
            },
            headers={"Authorization": f"Bearer {token}"},
        )

    assert resp.status_code == 200
    data = resp.json()
    assert data["id"] == "gen-abc123"
    assert data["model"] == "openai/gpt-4o"
    assert data["content"] == "Hello! How can I help?"
    assert data["usage"]["total_tokens"] == 18


async def test_chat_passes_parameters(client):
    token = await _user_token(client)
    with patch("app.routers.ai.openrouter.chat_completion", new_callable=AsyncMock, return_value=FAKE_CHAT_RESPONSE) as mock:
        await client.post(
            "/ai/chat",
            json={
                "model": "anthropic/claude-3-haiku",
                "messages": [{"role": "user", "content": "Hi"}],
                "temperature": 0.3,
                "max_tokens": 512,
            },
            headers={"Authorization": f"Bearer {token}"},
        )
    mock.assert_called_once_with(
        model="anthropic/claude-3-haiku",
        messages=[{"role": "user", "content": "Hi"}],
        temperature=0.3,
        max_tokens=512,
    )


async def test_chat_unauthenticated(client):
    resp = await client.post(
        "/ai/chat",
        json={"model": "openai/gpt-4o", "messages": [{"role": "user", "content": "Hi"}]},
    )
    assert resp.status_code == 401


async def test_chat_upstream_error(client):
    token = await _user_token(client)
    with patch(
        "app.routers.ai.openrouter.chat_completion",
        new_callable=AsyncMock,
        side_effect=Exception("timeout"),
    ):
        resp = await client.post(
            "/ai/chat",
            json={"model": "openai/gpt-4o", "messages": [{"role": "user", "content": "Hi"}]},
            headers={"Authorization": f"Bearer {token}"},
        )
    assert resp.status_code == 502


async def test_chat_malformed_upstream_response(client):
    token = await _user_token(client)
    with patch(
        "app.routers.ai.openrouter.chat_completion",
        new_callable=AsyncMock,
        return_value={"id": "x", "choices": []},  # empty choices
    ):
        resp = await client.post(
            "/ai/chat",
            json={"model": "openai/gpt-4o", "messages": [{"role": "user", "content": "Hi"}]},
            headers={"Authorization": f"Bearer {token}"},
        )
    assert resp.status_code == 502
@@ -0,0 +1,103 @@
"""Integration tests for auth endpoints using in-memory DuckDB."""
import os

import pytest
import pytest_asyncio
from httpx import AsyncClient, ASGITransport

from app.main import app
from app import db as db_module

os.environ.setdefault("JWT_SECRET", "test-secret-key-for-testing-only")


@pytest.fixture(autouse=True)
def fresh_db():
    """Reset the DB singleton to a fresh in-memory DB for each test."""
    db_module._conn = None
    db_module.init_db(":memory:")
    yield
    db_module.close_db()
    db_module._conn = None


@pytest_asyncio.fixture
async def client(fresh_db):
    """HTTP client wired to the app; explicitly depends on fresh_db to guarantee ordering."""
    transport = ASGITransport(app=app)
    async with AsyncClient(transport=transport, base_url="http://test") as ac:
        yield ac


async def test_register_success(client):
    resp = await client.post("/auth/register", json={"email": "user@example.com", "password": "secret123"})
    assert resp.status_code == 201
    data = resp.json()
    assert data["email"] == "user@example.com"
    assert data["role"] == "user"
    assert "id" in data


async def test_register_duplicate_email(client):
    payload = {"email": "dup@example.com", "password": "secret123"}
    await client.post("/auth/register", json=payload)
    resp = await client.post("/auth/register", json=payload)
    assert resp.status_code == 409


async def test_login_success(client):
    await client.post("/auth/register", json={"email": "user@example.com", "password": "secret123"})
    resp = await client.post("/auth/login", json={"email": "user@example.com", "password": "secret123"})
    assert resp.status_code == 200
    data = resp.json()
    assert "access_token" in data
    assert "refresh_token" in data
    assert data["token_type"] == "bearer"


async def test_login_wrong_password(client):
    await client.post("/auth/register", json={"email": "user@example.com", "password": "secret123"})
    resp = await client.post("/auth/login", json={"email": "user@example.com", "password": "wrong"})
    assert resp.status_code == 401


async def test_login_unknown_user(client):
    resp = await client.post("/auth/login", json={"email": "nobody@example.com", "password": "x"})
    assert resp.status_code == 401


async def test_refresh_success(client):
    await client.post("/auth/register", json={"email": "user@example.com", "password": "secret123"})
    login = await client.post("/auth/login", json={"email": "user@example.com", "password": "secret123"})
    refresh_token = login.json()["refresh_token"]

    resp = await client.post("/auth/refresh", json={"refresh_token": refresh_token})
    assert resp.status_code == 200
    data = resp.json()
    assert "access_token" in data
    assert "refresh_token" in data
    # New refresh token must differ (rotation)
    assert data["refresh_token"] != refresh_token


async def test_refresh_revoked_token(client):
    await client.post("/auth/register", json={"email": "user@example.com", "password": "secret123"})
    login = await client.post("/auth/login", json={"email": "user@example.com", "password": "secret123"})
    refresh_token = login.json()["refresh_token"]

    # Use once (rotates)
    await client.post("/auth/refresh", json={"refresh_token": refresh_token})
    # Try to reuse old token — must fail
    resp = await client.post("/auth/refresh", json={"refresh_token": refresh_token})
    assert resp.status_code == 401


async def test_logout_success(client):
    await client.post("/auth/register", json={"email": "user@example.com", "password": "secret123"})
    login = await client.post("/auth/login", json={"email": "user@example.com", "password": "secret123"})
    refresh_token = login.json()["refresh_token"]

    resp = await client.post("/auth/logout", json={"refresh_token": refresh_token})
    assert resp.status_code == 204

    # Refresh after logout must fail
    resp2 = await client.post("/auth/refresh", json={"refresh_token": refresh_token})
    assert resp2.status_code == 401
@@ -0,0 +1,230 @@
"""Tests for DuckDB initialization and schema."""
import asyncio

import pytest

from app import db as db_module


@pytest.fixture(autouse=True)
def fresh_db():
    """Reset the DB singleton around each test; tests init their own in-memory DB."""
    db_module._conn = None
    yield
    db_module.close_db()
    db_module._conn = None


# ---------------------------------------------------------------------------
# Lifecycle
# ---------------------------------------------------------------------------

def test_init_creates_users_table():
    conn = db_module.init_db(":memory:")
    result = conn.execute(
        "SELECT table_name FROM information_schema.tables WHERE table_name = 'users'"
    ).fetchone()
    assert result is not None


def test_init_creates_refresh_tokens_table():
    conn = db_module.init_db(":memory:")
    result = conn.execute(
        "SELECT table_name FROM information_schema.tables WHERE table_name = 'refresh_tokens'"
    ).fetchone()
    assert result is not None


def test_init_is_idempotent():
    conn1 = db_module.init_db(":memory:")
    conn2 = db_module.init_db(":memory:")
    assert conn1 is conn2


def test_get_conn_raises_before_init():
    with pytest.raises(RuntimeError, match="not initialised"):
        db_module.get_conn()


def test_get_conn_returns_connection_after_init():
    db_module.init_db(":memory:")
    conn = db_module.get_conn()
    assert conn is not None


def test_close_db_resets_connection():
    db_module.init_db(":memory:")
    db_module.close_db()
    with pytest.raises(RuntimeError):
        db_module.get_conn()


# ---------------------------------------------------------------------------
# Schema: users table columns and defaults
# ---------------------------------------------------------------------------

def test_users_columns():
    conn = db_module.init_db(":memory:")
    cols = {
        row[0]
        for row in conn.execute(
            "SELECT column_name FROM information_schema.columns WHERE table_name = 'users'"
        ).fetchall()
    }
    assert {"id", "email", "password_hash", "role", "created_at", "updated_at"} <= cols


def test_users_default_role_is_user():
    conn = db_module.init_db(":memory:")
    conn.execute(
        "INSERT INTO users (email, password_hash) VALUES ('a@example.com', 'hash')"
    )
    row = conn.execute(
        "SELECT role FROM users WHERE email = 'a@example.com'"
    ).fetchone()
    assert row[0] == "user"


def test_users_id_auto_generated():
    conn = db_module.init_db(":memory:")
    conn.execute(
        "INSERT INTO users (email, password_hash) VALUES ('b@example.com', 'hash')"
    )
    row = conn.execute(
        "SELECT id FROM users WHERE email = 'b@example.com'"
    ).fetchone()
    assert row[0] is not None


def test_users_email_unique_constraint():
    conn = db_module.init_db(":memory:")
    conn.execute(
        "INSERT INTO users (email, password_hash) VALUES ('c@example.com', 'h')"
    )
    with pytest.raises(Exception):
        conn.execute(
            "INSERT INTO users (email, password_hash) VALUES ('c@example.com', 'h2')"
        )


def test_users_timestamps_auto_set():
    conn = db_module.init_db(":memory:")
    conn.execute(
        "INSERT INTO users (email, password_hash) VALUES ('d@example.com', 'hash')"
    )
    row = conn.execute(
        "SELECT created_at, updated_at FROM users WHERE email = 'd@example.com'"
    ).fetchone()
    assert row[0] is not None
    assert row[1] is not None


# ---------------------------------------------------------------------------
# Schema: refresh_tokens table columns and defaults
# ---------------------------------------------------------------------------

def test_refresh_tokens_columns():
    conn = db_module.init_db(":memory:")
    cols = {
        row[0]
        for row in conn.execute(
            "SELECT column_name FROM information_schema.columns WHERE table_name = 'refresh_tokens'"
        ).fetchall()
    }
    assert {"jti", "user_id", "issued_at", "expires_at", "revoked"} <= cols


def test_refresh_tokens_default_revoked_false():
    conn = db_module.init_db(":memory:")
    conn.execute(
        "INSERT INTO users (email, password_hash) VALUES ('e@example.com', 'h')"
    )
    user_id = conn.execute(
        "SELECT id FROM users WHERE email = 'e@example.com'"
    ).fetchone()[0]
    conn.execute(
        "INSERT INTO refresh_tokens (user_id, expires_at) VALUES (?, now() + INTERVAL 7 DAY)",
        [user_id],
    )
    row = conn.execute(
        "SELECT revoked FROM refresh_tokens WHERE user_id = ?", [user_id]
    ).fetchone()
    assert row[0] is False


def test_refresh_tokens_jti_auto_generated():
    conn = db_module.init_db(":memory:")
    conn.execute(
        "INSERT INTO users (email, password_hash) VALUES ('f@example.com', 'h')"
    )
    user_id = conn.execute(
        "SELECT id FROM users WHERE email = 'f@example.com'"
    ).fetchone()[0]
    conn.execute(
        "INSERT INTO refresh_tokens (user_id, expires_at) VALUES (?, now() + INTERVAL 7 DAY)",
        [user_id],
    )
    row = conn.execute(
        "SELECT jti FROM refresh_tokens WHERE user_id = ?", [user_id]
    ).fetchone()
    assert row[0] is not None


# ---------------------------------------------------------------------------
# Write lock
# ---------------------------------------------------------------------------

def test_get_write_lock_returns_asyncio_lock():
    lock = db_module.get_write_lock()
    assert isinstance(lock, asyncio.Lock)


def test_get_write_lock_returns_same_instance():
    lock1 = db_module.get_write_lock()
    lock2 = db_module.get_write_lock()
    assert lock1 is lock2


async def test_write_lock_serialises_concurrent_writes():
    """Two coroutines acquiring the lock must not overlap."""
    db_module.init_db(":memory:")
    lock = db_module.get_write_lock()
    order = []

    async def writer(label: str):
        async with lock:
            order.append(f"{label}-start")
            await asyncio.sleep(0)  # yield to the event loop while holding the lock
            order.append(f"{label}-end")

    await asyncio.gather(writer("A"), writer("B"))
    # Each writer's start and end must be adjacent (no interleaving)
    assert order.index("A-start") + 1 == order.index("A-end")
    assert order.index("B-start") + 1 == order.index("B-end")


# ---------------------------------------------------------------------------
# Admin seed user
# ---------------------------------------------------------------------------

def test_seed_admin_user_created_on_init():
    conn = db_module.init_db(":memory:")
    row = conn.execute(
        "SELECT email, role FROM users WHERE email = 'ai@allucanget.biz'"
    ).fetchone()
    assert row is not None
    assert row[0] == "ai@allucanget.biz"
    assert row[1] == "admin"


def test_seed_admin_is_idempotent():
    conn = db_module.init_db(":memory:")
    # Simulate re-running the seed (a second init_db call reuses the
    # connection, so call _seed_admin directly)
    db_module._seed_admin(conn)
    count = conn.execute(
        "SELECT COUNT(*) FROM users WHERE email = 'ai@allucanget.biz'"
    ).fetchone()[0]
    assert count == 1


def test_seed_admin_email_env_override(monkeypatch):
    monkeypatch.setenv("ADMIN_EMAIL", "custom@example.com")
    monkeypatch.setenv("ADMIN_PASSWORD", "custompass")
    conn = db_module.init_db(":memory:")
    row = conn.execute(
        "SELECT email, role FROM users WHERE email = 'custom@example.com'"
    ).fetchone()
    assert row is not None
    assert row[1] == "admin"
@@ -0,0 +1,591 @@
"""Tests for generate endpoints — all OpenRouter calls mocked."""
import os

import pytest
import pytest_asyncio
from unittest.mock import AsyncMock, patch
from httpx import AsyncClient, ASGITransport

from app.main import app
from app import db as db_module

os.environ.setdefault("JWT_SECRET", "test-secret-key-for-testing-only")
os.environ.setdefault("OPENROUTER_API_KEY", "test-key")

FAKE_CHAT = {
    "id": "gen-text-1",
    "model": "openai/gpt-4o",
    "choices": [{"message": {"role": "assistant", "content": "Once upon a time..."}}],
    "usage": {"prompt_tokens": 5, "completion_tokens": 10, "total_tokens": 15},
}

FAKE_VIDEO = {
    "id": "gen-vid-1",
    "polling_url": "https://openrouter.ai/api/v1/videos/gen-vid-1",
    "status": "queued",
}

FAKE_VIDEO_DONE = {
    "id": "gen-vid-2",
    "polling_url": "https://openrouter.ai/api/v1/videos/gen-vid-2",
    "status": "completed",
    "unsigned_urls": ["https://example.com/video.mp4"],
}


@pytest.fixture(autouse=True)
def fresh_db():
    db_module._conn = None
    db_module.init_db(":memory:")
    yield
    db_module.close_db()
    db_module._conn = None


@pytest_asyncio.fixture
async def client(fresh_db):
    transport = ASGITransport(app=app)
    async with AsyncClient(transport=transport, base_url="http://test") as ac:
        yield ac


async def _user_token(client):
    await client.post("/auth/register", json={"email": "user@example.com", "password": "secret123"})
    resp = await client.post("/auth/login", json={"email": "user@example.com", "password": "secret123"})
    return resp.json()["access_token"]


# ---------------------------------------------------------------------------
# POST /generate/text
# ---------------------------------------------------------------------------

async def test_generate_text(client):
    token = await _user_token(client)
    with patch("app.routers.generate.openrouter.chat_completion", new_callable=AsyncMock, return_value=FAKE_CHAT):
        resp = await client.post(
            "/generate/text",
            json={"model": "openai/gpt-4o", "prompt": "Tell me a story"},
            headers={"Authorization": f"Bearer {token}"},
        )
    assert resp.status_code == 200
    data = resp.json()
    assert data["content"] == "Once upon a time..."
    assert data["id"] == "gen-text-1"
    assert data["usage"]["total_tokens"] == 15


async def test_generate_text_with_system_prompt(client):
    token = await _user_token(client)
    mock = AsyncMock(return_value=FAKE_CHAT)
    with patch("app.routers.generate.openrouter.chat_completion", mock):
        await client.post(
            "/generate/text",
            json={"model": "openai/gpt-4o", "prompt": "Hello", "system_prompt": "Be concise."},
            headers={"Authorization": f"Bearer {token}"},
        )
    call_messages = mock.call_args.kwargs["messages"]
    assert call_messages[0] == {"role": "system", "content": "Be concise."}
    assert call_messages[1] == {"role": "user", "content": "Hello"}


async def test_generate_text_with_messages_array(client):
    """The messages field takes precedence over prompt for multi-turn chat."""
    token = await _user_token(client)
    mock = AsyncMock(return_value=FAKE_CHAT)
    messages = [
        {"role": "user", "content": "First message"},
        {"role": "assistant", "content": "Reply"},
        {"role": "user", "content": "Follow up"},
    ]
    with patch("app.routers.generate.openrouter.chat_completion", mock):
        resp = await client.post(
            "/generate/text",
            json={"model": "openai/gpt-4o", "messages": messages},
            headers={"Authorization": f"Bearer {token}"},
        )
    assert resp.status_code == 200
    call_messages = mock.call_args.kwargs["messages"]
    assert len(call_messages) == 3
    assert call_messages[2]["content"] == "Follow up"


async def test_generate_text_messages_with_system_prompt(client):
    """system_prompt is prepended when messages are provided and no system message is present."""
    token = await _user_token(client)
    mock = AsyncMock(return_value=FAKE_CHAT)
    messages = [{"role": "user", "content": "Hi"}]
    with patch("app.routers.generate.openrouter.chat_completion", mock):
        await client.post(
            "/generate/text",
            json={"model": "openai/gpt-4o", "messages": messages, "system_prompt": "Be brief."},
            headers={"Authorization": f"Bearer {token}"},
        )
    call_messages = mock.call_args.kwargs["messages"]
    assert call_messages[0] == {"role": "system", "content": "Be brief."}
    assert call_messages[1] == {"role": "user", "content": "Hi"}


async def test_generate_text_unauthenticated(client):
    resp = await client.post("/generate/text", json={"model": "openai/gpt-4o", "prompt": "Hi"})
    assert resp.status_code == 401


async def test_generate_text_upstream_error(client):
    token = await _user_token(client)
    with patch("app.routers.generate.openrouter.chat_completion", new_callable=AsyncMock, side_effect=Exception("timeout")):
        resp = await client.post(
            "/generate/text",
            json={"model": "openai/gpt-4o", "prompt": "Hi"},
            headers={"Authorization": f"Bearer {token}"},
        )
    assert resp.status_code == 502


# ---------------------------------------------------------------------------
# POST /generate/image
# ---------------------------------------------------------------------------

FAKE_IMAGE_CHAT_FLUX = {
    "id": "gen-img-chat-1",
    "model": "black-forest-labs/flux.2-klein-4b",
    "choices": [{
        "message": {
            "role": "assistant",
            "content": None,
            "images": [{
                "type": "image_url",
                "image_url": {"url": "data:image/png;base64,abc123"},
            }],
        }
    }],
}

FAKE_IMAGE_CHAT_GPT5 = {
    "id": "gen-img-chat-2",
    "model": "openai/gpt-5-image-mini",
    "choices": [{
        "message": {
            "role": "assistant",
            "content": "Generated image.",
            "images": [{
                "type": "image_url",
                "image_url": {"url": "data:image/png;base64,xyz789"},
            }],
        }
    }],
}

FAKE_IMAGE_CHAT_GEMINI = {
    "id": "gen-img-chat-3",
    "model": "google/gemini-2.5-flash-image",
    "choices": [{
        "message": {
            "role": "assistant",
            "content": "Here is your image.",
            "images": [{
                "type": "image_url",
                "image_url": {"url": "data:image/png;base64,gemini123"},
            }],
        }
    }],
}


async def test_generate_image(client):
    """All models now use generate_image_chat (chat completions endpoint)."""
    token = await _user_token(client)
    with patch("app.routers.generate.openrouter.generate_image_chat", new_callable=AsyncMock, return_value=FAKE_IMAGE_CHAT_GEMINI):
        resp = await client.post(
            "/generate/image",
            json={"model": "google/gemini-2.5-flash-image", "prompt": "A cat on the moon"},
            headers={"Authorization": f"Bearer {token}"},
        )
    assert resp.status_code == 200
    data = resp.json()
    assert data["id"] == "gen-img-chat-3"
    assert len(data["images"]) == 1
    assert data["images"][0]["url"] == "data:image/png;base64,gemini123"
    assert data["images"][0]["image_id"] is not None  # stored in DB


async def test_generate_image_unauthenticated(client):
    resp = await client.post("/generate/image", json={"model": "google/gemini-2.5-flash-image", "prompt": "Hi"})
    assert resp.status_code == 401


async def test_generate_image_upstream_error(client):
    token = await _user_token(client)
    with patch("app.routers.generate.openrouter.generate_image_chat", new_callable=AsyncMock, side_effect=Exception("rate limit")):
        resp = await client.post(
            "/generate/image",
            json={"model": "google/gemini-2.5-flash-image", "prompt": "Hi"},
            headers={"Authorization": f"Bearer {token}"},
        )
    assert resp.status_code == 502


async def test_generate_image_with_image_config(client):
    """Passes aspect_ratio + image_size through to generate_image_chat."""
    token = await _user_token(client)
    mock = AsyncMock(return_value=FAKE_IMAGE_CHAT_GEMINI)
    with patch("app.routers.generate.openrouter.generate_image_chat", mock):
        await client.post(
            "/generate/image",
            json={
                "model": "google/gemini-2.5-flash-image",
                "prompt": "A landscape",
                "aspect_ratio": "16:9",
                "image_size": "2K",
            },
            headers={"Authorization": f"Bearer {token}"},
        )
    call_kwargs = mock.call_args.kwargs
    assert call_kwargs["image_config"]["aspect_ratio"] == "16:9"
    assert call_kwargs["image_config"]["image_size"] == "2K"


async def test_generate_image_default_modalities_image_text(client):
    """Model not in cache → default modalities = ['image', 'text']."""
    token = await _user_token(client)
    mock = AsyncMock(return_value=FAKE_IMAGE_CHAT_GEMINI)
    with patch("app.routers.generate.openrouter.generate_image_chat", mock):
        await client.post(
            "/generate/image",
            json={"model": "google/gemini-2.5-flash-image", "prompt": "Hi"},
            headers={"Authorization": f"Bearer {token}"},
        )
    assert mock.call_args.kwargs["modalities"] == ["image", "text"]


async def test_generate_image_image_only_modalities_from_cache(client):
    """Model cached with image-only output_modalities → modalities = ['image']."""
    import json as _json
    from datetime import datetime, timezone

    token = await _user_token(client)

    # Seed the cache with an image-only model
    conn = db_module.get_conn()
    now = datetime.now(timezone.utc).replace(tzinfo=None)
    conn.execute(
        "DELETE FROM models_cache WHERE model_id = 'black-forest-labs/flux.2-pro'"
    )
    conn.execute(
        """INSERT INTO models_cache (model_id, name, modality, context_length, pricing, fetched_at, output_modalities)
        VALUES (?, ?, ?, ?, ?, ?, ?)""",
        ["black-forest-labs/flux.2-pro", "FLUX.2 Pro", "image", None, None, now,
         _json.dumps(["image"])],
    )

    mock = AsyncMock(return_value=FAKE_IMAGE_CHAT_FLUX)
    with patch("app.routers.generate.openrouter.generate_image_chat", mock):
        resp = await client.post(
            "/generate/image",
            json={"model": "black-forest-labs/flux.2-pro", "prompt": "Sky"},
            headers={"Authorization": f"Bearer {token}"},
        )
    assert resp.status_code == 200
    assert mock.call_args.kwargs["modalities"] == ["image"]


async def test_generate_image_no_images_in_response(client):
    """502 when the model returns no images."""
    token = await _user_token(client)
    empty_response = {
        "id": "gen-empty",
        "model": "google/gemini-2.5-flash-image",
        "choices": [{"message": {"role": "assistant", "content": "ok", "images": []}}],
    }
    with patch("app.routers.generate.openrouter.generate_image_chat",
               new_callable=AsyncMock, return_value=empty_response):
        resp = await client.post(
            "/generate/image",
            json={"model": "google/gemini-2.5-flash-image", "prompt": "Hi"},
            headers={"Authorization": f"Bearer {token}"},
        )
    assert resp.status_code == 502
    assert "No images returned" in resp.json()["detail"]


async def test_generate_image_flux(client):
    """A Flux model works correctly via chat completions."""
    token = await _user_token(client)
    with patch("app.routers.generate.openrouter.generate_image_chat",
               new_callable=AsyncMock, return_value=FAKE_IMAGE_CHAT_FLUX):
        resp = await client.post(
            "/generate/image",
            json={"model": "black-forest-labs/flux.2-klein-4b", "prompt": "A sunset"},
            headers={"Authorization": f"Bearer {token}"},
        )
    assert resp.status_code == 200
    data = resp.json()
    assert data["images"][0]["url"] == "data:image/png;base64,abc123"


async def test_generate_image_stored_in_db(client):
    """A generated image row persists in the generated_images table."""
    token = await _user_token(client)
    with patch("app.routers.generate.openrouter.generate_image_chat",
               new_callable=AsyncMock, return_value=FAKE_IMAGE_CHAT_GEMINI):
        resp = await client.post(
            "/generate/image",
            json={"model": "google/gemini-2.5-flash-image", "prompt": "A mountain"},
            headers={"Authorization": f"Bearer {token}"},
        )
    assert resp.status_code == 200
    image_id = resp.json()["images"][0]["image_id"]
    assert image_id is not None

    row = db_module.get_conn().execute(
        "SELECT model_id, prompt, image_data FROM generated_images WHERE id = ?",
        [image_id],
    ).fetchone()
    assert row is not None
    assert row[0] == "google/gemini-2.5-flash-image"
    assert row[1] == "A mountain"
    assert row[2] == "data:image/png;base64,gemini123"


# ---------------------------------------------------------------------------
# POST /generate/video
# ---------------------------------------------------------------------------

async def test_generate_video(client):
    token = await _user_token(client)
    with patch("app.routers.generate.openrouter.generate_video", new_callable=AsyncMock, return_value=FAKE_VIDEO):
        resp = await client.post(
            "/generate/video",
            json={"model": "stability/stable-video", "prompt": "Ocean waves at sunset"},
            headers={"Authorization": f"Bearer {token}"},
        )
    assert resp.status_code == 200
    data = resp.json()
    assert data["id"] == "gen-vid-1"
    assert data["status"] == "queued"
    assert data["polling_url"] == "https://openrouter.ai/api/v1/videos/gen-vid-1"
    assert data["video_url"] is None


async def test_generate_video_unauthenticated(client):
    resp = await client.post("/generate/video", json={"model": "m", "prompt": "p"})
    assert resp.status_code == 401


async def test_generate_video_upstream_error(client):
    token = await _user_token(client)
    with patch("app.routers.generate.openrouter.generate_video", new_callable=AsyncMock, side_effect=Exception("503")):
        resp = await client.post(
            "/generate/video",
            json={"model": "stability/stable-video", "prompt": "Hi"},
            headers={"Authorization": f"Bearer {token}"},
        )
    assert resp.status_code == 502


# ---------------------------------------------------------------------------
# POST /generate/video/from-image
# ---------------------------------------------------------------------------

async def test_generate_video_from_image(client):
    token = await _user_token(client)
    with patch("app.routers.generate.openrouter.generate_video_from_image", new_callable=AsyncMock, return_value=FAKE_VIDEO_DONE):
        resp = await client.post(
            "/generate/video/from-image",
            json={
                "model": "runway/gen-3",
                "image_url": "https://example.com/cat.jpg",
                "prompt": "Cat runs across the room",
            },
            headers={"Authorization": f"Bearer {token}"},
        )
    assert resp.status_code == 200
    data = resp.json()
    assert data["status"] == "completed"
    assert data["video_url"] == "https://example.com/video.mp4"
    assert data["video_urls"] == ["https://example.com/video.mp4"]


async def test_poll_video_status(client):
    token = await _user_token(client)
    mock_result = {
        "id": "gen-vid-1",
        "status": "completed",
        "unsigned_urls": ["https://example.com/video.mp4"],
    }
    with patch("app.routers.generate.openrouter.poll_video_status", new_callable=AsyncMock, return_value=mock_result):
|
||||
resp = await client.get(
|
||||
"/generate/video/status",
|
||||
params={"polling_url": "https://openrouter.ai/api/v1/videos/gen-vid-1"},
|
||||
headers={"Authorization": f"Bearer {token}"},
|
||||
)
|
||||
assert resp.status_code == 200
|
||||
data = resp.json()
|
||||
assert data["status"] == "completed"
|
||||
assert data["video_url"] == "https://example.com/video.mp4"
|
||||
|
||||
|
||||
async def test_poll_video_status_unauthenticated(client):
|
||||
resp = await client.get(
|
||||
"/generate/video/status",
|
||||
params={"polling_url": "https://openrouter.ai/api/v1/videos/gen-vid-1"},
|
||||
)
|
||||
assert resp.status_code == 401
|
||||
|
||||
|
||||
async def test_poll_video_status_upstream_error(client):
|
||||
token = await _user_token(client)
|
||||
with patch("app.routers.generate.openrouter.poll_video_status", new_callable=AsyncMock, side_effect=Exception("timeout")):
|
||||
resp = await client.get(
|
||||
"/generate/video/status",
|
||||
params={"polling_url": "https://openrouter.ai/api/v1/videos/gen-vid-1"},
|
||||
headers={"Authorization": f"Bearer {token}"},
|
||||
)
|
||||
assert resp.status_code == 502
|
||||
|
||||
|
||||
async def test_generate_video_from_image_unauthenticated(client):
|
||||
resp = await client.post(
|
||||
"/generate/video/from-image",
|
||||
json={"model": "m", "image_url": "https://example.com/img.jpg", "prompt": "p"},
|
||||
)
|
||||
assert resp.status_code == 401
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Video job DB storage
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
async def test_generate_video_stored_in_db(client):
|
||||
"""Submitting a video job inserts a row into generated_videos."""
|
||||
token = await _user_token(client)
|
||||
with patch("app.routers.generate.openrouter.generate_video", new_callable=AsyncMock, return_value=FAKE_VIDEO):
|
||||
resp = await client.post(
|
||||
"/generate/video",
|
||||
json={"model": "stability/stable-video", "prompt": "Ocean waves"},
|
||||
headers={"Authorization": f"Bearer {token}"},
|
||||
)
|
||||
assert resp.status_code == 200
|
||||
|
||||
row = db_module.get_conn().execute(
|
||||
"SELECT job_id, model_id, prompt, status FROM generated_videos WHERE job_id = ?",
|
||||
["gen-vid-1"],
|
||||
).fetchone()
|
||||
assert row is not None
|
||||
assert row[0] == "gen-vid-1"
|
||||
assert row[1] == "stability/stable-video"
|
||||
assert row[2] == "Ocean waves"
|
||||
assert row[3] == "queued"
|
||||
|
||||
|
||||
async def test_generate_video_from_image_stored_in_db(client):
|
||||
"""Submitting a from-image job inserts a row into generated_videos."""
|
||||
token = await _user_token(client)
|
||||
with patch("app.routers.generate.openrouter.generate_video_from_image", new_callable=AsyncMock, return_value=FAKE_VIDEO_DONE):
|
||||
resp = await client.post(
|
||||
"/generate/video/from-image",
|
||||
json={
|
||||
"model": "runway/gen-3",
|
||||
"image_url": "https://example.com/cat.jpg",
|
||||
"prompt": "Cat runs",
|
||||
},
|
||||
headers={"Authorization": f"Bearer {token}"},
|
||||
)
|
||||
assert resp.status_code == 200
|
||||
|
||||
row = db_module.get_conn().execute(
|
||||
"SELECT job_id, model_id, prompt, status FROM generated_videos WHERE job_id = ?",
|
||||
["gen-vid-2"],
|
||||
).fetchone()
|
||||
assert row is not None
|
||||
assert row[1] == "runway/gen-3"
|
||||
assert row[2] == "Cat runs"
|
||||
|
||||
|
||||
async def test_poll_video_updates_db_on_completion(client):
|
||||
"""Polling a completed job updates the row status and video_url."""
|
||||
token = await _user_token(client)
|
||||
# First submit a job
|
||||
with patch("app.routers.generate.openrouter.generate_video", new_callable=AsyncMock, return_value=FAKE_VIDEO):
|
||||
await client.post(
|
||||
"/generate/video",
|
||||
json={"model": "stability/stable-video", "prompt": "Test"},
|
||||
headers={"Authorization": f"Bearer {token}"},
|
||||
)
|
||||
|
||||
# Now poll and get completed status
|
||||
mock_result = {
|
||||
"id": "gen-vid-1",
|
||||
"status": "completed",
|
||||
"unsigned_urls": ["https://example.com/video.mp4"],
|
||||
}
|
||||
with patch("app.routers.generate.openrouter.poll_video_status", new_callable=AsyncMock, return_value=mock_result):
|
||||
await client.get(
|
||||
"/generate/video/status",
|
||||
params={"polling_url": "https://openrouter.ai/api/v1/videos/gen-vid-1"},
|
||||
headers={"Authorization": f"Bearer {token}"},
|
||||
)
|
||||
|
||||
row = db_module.get_conn().execute(
|
||||
"SELECT status, video_url FROM generated_videos WHERE job_id = ?",
|
||||
["gen-vid-1"],
|
||||
).fetchone()
|
||||
assert row is not None
|
||||
assert row[0] == "completed"
|
||||
assert row[1] == "https://example.com/video.mp4"
|
||||
|
||||
|
||||
async def test_list_generated_videos_empty(client):
|
||||
"""GET /generate/videos returns empty list initially."""
|
||||
token = await _user_token(client)
|
||||
resp = await client.get(
|
||||
"/generate/videos",
|
||||
headers={"Authorization": f"Bearer {token}"},
|
||||
)
|
||||
assert resp.status_code == 200
|
||||
assert resp.json() == []
|
||||
|
||||
|
||||
async def test_list_generated_videos_returns_own_jobs(client):
|
||||
"""GET /generate/videos returns the current user's jobs only."""
|
||||
token = await _user_token(client)
|
||||
with patch("app.routers.generate.openrouter.generate_video", new_callable=AsyncMock, return_value=FAKE_VIDEO):
|
||||
await client.post(
|
||||
"/generate/video",
|
||||
json={"model": "stability/stable-video", "prompt": "Waves"},
|
||||
headers={"Authorization": f"Bearer {token}"},
|
||||
)
|
||||
|
||||
resp = await client.get(
|
||||
"/generate/videos",
|
||||
headers={"Authorization": f"Bearer {token}"},
|
||||
)
|
||||
assert resp.status_code == 200
|
||||
data = resp.json()
|
||||
assert len(data) == 1
|
||||
assert data[0]["job_id"] == "gen-vid-1"
|
||||
assert data[0]["prompt"] == "Waves"
|
||||
assert data[0]["status"] == "queued"
|
||||
|
||||
|
||||
async def test_list_generated_videos_unauthenticated(client):
|
||||
resp = await client.get("/generate/videos")
|
||||
assert resp.status_code == 401
|
||||
|
||||
|
||||
async def test_generate_video_from_image_upstream_error(client):
|
||||
token = await _user_token(client)
|
||||
with patch("app.routers.generate.openrouter.generate_video_from_image", new_callable=AsyncMock, side_effect=Exception("error")):
|
||||
resp = await client.post(
|
||||
"/generate/video/from-image",
|
||||
json={"model": "runway/gen-3",
|
||||
"image_url": "https://example.com/img.jpg", "prompt": "p"},
|
||||
headers={"Authorization": f"Bearer {token}"},
|
||||
)
|
||||
assert resp.status_code == 502
|
||||
@@ -0,0 +1,184 @@
"""Tests for image upload and retrieval endpoints."""
import io
import os
import pytest
import pytest_asyncio
from httpx import AsyncClient, ASGITransport

from app.main import app
from app import db as db_module

os.environ.setdefault("JWT_SECRET", "test-secret-key-for-testing-only")
# Use a temp dir so file I/O works without polluting project data/
os.environ.setdefault("UPLOAD_DIR", "/tmp/test_uploads")


@pytest.fixture(autouse=True)
def fresh_db():
    db_module._conn = None
    db_module.init_db(":memory:")
    yield
    db_module.close_db()
    db_module._conn = None


@pytest_asyncio.fixture
async def client(fresh_db):
    transport = ASGITransport(app=app)
    async with AsyncClient(transport=transport, base_url="http://test") as ac:
        yield ac


async def _user_token(client) -> str:
    await client.post("/auth/register", json={"email": "user@example.com", "password": "secret123"})
    resp = await client.post("/auth/login", json={"email": "user@example.com", "password": "secret123"})
    return resp.json()["access_token"]


async def _other_token(client) -> str:
    await client.post("/auth/register", json={"email": "other@example.com", "password": "secret123"})
    resp = await client.post("/auth/login", json={"email": "other@example.com", "password": "secret123"})
    return resp.json()["access_token"]


def _png_bytes() -> bytes:
    """Minimal valid 1x1 PNG."""
    return (
        b"\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x00\x01\x00\x00\x00\x01"
        b"\x08\x02\x00\x00\x00\x90wS\xde\x00\x00\x00\x0cIDATx\x9cc\xf8\x0f\x00"
        b"\x00\x01\x01\x00\x05\x18\xd4n\x00\x00\x00\x00IEND\xaeB`\x82"
    )


# ---------------------------------------------------------------------------
# POST /images/upload
# ---------------------------------------------------------------------------


async def test_upload_image_success(client):
    token = await _user_token(client)
    resp = await client.post(
        "/images/upload",
        files={"file": ("test.png", io.BytesIO(_png_bytes()), "image/png")},
        headers={"Authorization": f"Bearer {token}"},
    )
    assert resp.status_code == 201
    data = resp.json()
    assert data["filename"] == "test.png"
    assert data["content_type"] == "image/png"
    assert "id" in data
    assert data["size_bytes"] > 0


async def test_upload_image_unauthenticated(client):
    resp = await client.post(
        "/images/upload",
        files={"file": ("test.png", io.BytesIO(_png_bytes()), "image/png")},
    )
    assert resp.status_code == 401


async def test_upload_image_unsupported_type(client):
    token = await _user_token(client)
    resp = await client.post(
        "/images/upload",
        files={"file": ("doc.pdf", io.BytesIO(b"%PDF-fake"), "application/pdf")},
        headers={"Authorization": f"Bearer {token}"},
    )
    assert resp.status_code == 415


async def test_upload_image_too_large(client, monkeypatch):
    import app.routers.images as images_mod
    monkeypatch.setattr(images_mod, "MAX_SIZE_BYTES", 5)
    token = await _user_token(client)
    resp = await client.post(
        "/images/upload",
        files={"file": ("big.png", io.BytesIO(b"x" * 10), "image/png")},
        headers={"Authorization": f"Bearer {token}"},
    )
    assert resp.status_code == 413


# ---------------------------------------------------------------------------
# GET /images/
# ---------------------------------------------------------------------------


async def test_list_images_empty(client):
    token = await _user_token(client)
    resp = await client.get("/images/", headers={"Authorization": f"Bearer {token}"})
    assert resp.status_code == 200
    assert resp.json() == []


async def test_list_images_returns_own_only(client):
    token = await _user_token(client)
    other = await _other_token(client)

    # Upload one image as user, one as other
    for tok, name in [(token, "mine.png"), (other, "theirs.png")]:
        await client.post(
            "/images/upload",
            files={"file": (name, io.BytesIO(_png_bytes()), "image/png")},
            headers={"Authorization": f"Bearer {tok}"},
        )

    resp = await client.get("/images/", headers={"Authorization": f"Bearer {token}"})
    assert resp.status_code == 200
    items = resp.json()
    assert len(items) == 1
    assert items[0]["filename"] == "mine.png"


async def test_list_images_unauthenticated(client):
    resp = await client.get("/images/")
    assert resp.status_code == 401


# ---------------------------------------------------------------------------
# GET /images/{id}/file
# ---------------------------------------------------------------------------


async def test_serve_image_success(client):
    token = await _user_token(client)
    up = await client.post(
        "/images/upload",
        files={"file": ("pixel.png", io.BytesIO(_png_bytes()), "image/png")},
        headers={"Authorization": f"Bearer {token}"},
    )
    image_id = up.json()["id"]

    resp = await client.get(
        f"/images/{image_id}/file",
        headers={"Authorization": f"Bearer {token}"},
    )
    assert resp.status_code == 200
    assert resp.headers["content-type"].startswith("image/png")
    assert resp.content == _png_bytes()


async def test_serve_image_wrong_user(client):
    token = await _user_token(client)
    other = await _other_token(client)

    up = await client.post(
        "/images/upload",
        files={"file": ("secret.png", io.BytesIO(_png_bytes()), "image/png")},
        headers={"Authorization": f"Bearer {token}"},
    )
    image_id = up.json()["id"]

    resp = await client.get(
        f"/images/{image_id}/file",
        headers={"Authorization": f"Bearer {other}"},
    )
    assert resp.status_code == 403


async def test_serve_image_not_found(client):
    token = await _user_token(client)
    resp = await client.get(
        "/images/00000000-0000-0000-0000-000000000000/file",
        headers={"Authorization": f"Bearer {token}"},
    )
    assert resp.status_code == 404
@@ -0,0 +1,366 @@
"""Tests for model cache service and router."""
import json
import os
from datetime import datetime, timedelta, timezone
from unittest.mock import AsyncMock, patch

import pytest
import pytest_asyncio
from httpx import ASGITransport, AsyncClient

from app import db as db_module
from app.main import app
from app.services.models import (
    _extract_output_modality,
    _normalize_modality,
    _parse_modality,
    get_cached_models,
    get_model_output_modalities,
    is_cache_stale,
    refresh_models_cache,
)

os.environ.setdefault("JWT_SECRET", "test-secret-key-for-testing-only")
os.environ.setdefault("OPENROUTER_API_KEY", "test-key")

FAKE_MODELS_RAW = [
    {
        "id": "openai/gpt-4o",
        "name": "GPT-4o",
        "context_length": 128000,
        "pricing": {"prompt": "0.000005"},
        "architecture": {"modality": "text->text", "output_modalities": ["text"]},
    },
    {
        "id": "anthropic/claude-3-haiku",
        "name": "Claude 3 Haiku",
        "context_length": 200000,
        "pricing": {},
        "architecture": {"modality": "text+image->text", "output_modalities": ["text"]},
    },
    {
        "id": "openai/dall-e-3",
        "name": "DALL-E 3",
        "context_length": None,
        "pricing": {"image": "0.04"},
        "architecture": {"modality": "text->image", "output_modalities": ["image"]},
    },
    {
        "id": "openai/sora-2",
        "name": "Sora 2",
        "context_length": None,
        "pricing": {"video": "0.10"},
        "architecture": {"modality": "text->video", "output_modalities": ["video"]},
    },
    {
        "id": "google/gemini-2.5-flash-image",
        "name": "Gemini 2.5 Flash Image",
        "context_length": None,
        "pricing": {},
        "architecture": {"output_modalities": ["image", "text"]},
    },
]


@pytest.fixture(autouse=True)
def fresh_db():
    db_module._conn = None
    db_module.init_db(":memory:")
    yield
    db_module.close_db()
    db_module._conn = None


@pytest_asyncio.fixture
async def client(fresh_db):
    transport = ASGITransport(app=app)
    async with AsyncClient(transport=transport, base_url="http://test") as ac:
        yield ac


async def _register_login(client, email, password, is_admin=False):
    """Register + login; optionally promote to admin directly in DB."""
    await client.post("/auth/register", json={"email": email, "password": password})
    if is_admin:
        db_module.get_conn().execute(
            "UPDATE users SET role = 'admin' WHERE email = ?", [email]
        )
    resp = await client.post("/auth/login", json={"email": email, "password": password})
    return resp.json()["access_token"]


# ---------------------------------------------------------------------------
# Unit tests: _parse_modality
# ---------------------------------------------------------------------------


def test_parse_modality_text():
    assert _parse_modality("text->text") == "text"


def test_parse_modality_multimodal_input_text_output():
    assert _parse_modality("text+image->text") == "text"


def test_parse_modality_image():
    assert _parse_modality("text->image") == "image"


def test_parse_modality_video():
    assert _parse_modality("text->video") == "video"


def test_parse_modality_audio():
    assert _parse_modality("text->audio") == "audio"


def test_parse_modality_no_arrow_fallback():
    assert _parse_modality("text") == "text"


def test_normalize_embedding_alias():
    assert _normalize_modality("embedding") == "embeddings"


def test_extract_output_modality_prefers_output_modalities():
    model = {
        "architecture": {
            "modality": "text->text",
            "output_modalities": ["image"],
        }
    }
    assert _extract_output_modality(model) == "image"


def test_extract_output_modality_legacy_fallback():
    model = {"architecture": {"modality": "text->audio"}}
    assert _extract_output_modality(model) == "audio"


# ---------------------------------------------------------------------------
# Unit tests: is_cache_stale
# ---------------------------------------------------------------------------


def test_cache_stale_when_empty():
    conn = db_module.get_conn()
    assert is_cache_stale(conn) is True


def test_cache_not_stale_after_fresh_insert():
    conn = db_module.get_conn()
    now = datetime.now(timezone.utc).replace(tzinfo=None)
    conn.execute(
        "INSERT INTO models_cache (model_id, name, modality, fetched_at) VALUES (?, ?, ?, ?)",
        ["openai/gpt-4o", "GPT-4o", "text", now],
    )
    assert is_cache_stale(conn) is False


def test_cache_stale_after_ttl_exceeded():
    conn = db_module.get_conn()
    # Store naive UTC to match DuckDB TIMESTAMP behaviour
    old_time = datetime.now(timezone.utc).replace(tzinfo=None) - timedelta(hours=25)
    conn.execute(
        "INSERT INTO models_cache (model_id, name, modality, fetched_at) VALUES (?, ?, ?, ?)",
        ["openai/gpt-4o", "GPT-4o", "text", old_time],
    )
    assert is_cache_stale(conn) is True


# ---------------------------------------------------------------------------
# Unit tests: refresh_models_cache + get_cached_models
# ---------------------------------------------------------------------------


async def test_refresh_stores_models():
    conn = db_module.get_conn()
    with patch(
        "app.services.models.openrouter.list_models",
        new_callable=AsyncMock,
        return_value=FAKE_MODELS_RAW,
    ):
        count = await refresh_models_cache(conn)
    assert count == 5
    all_models = get_cached_models(conn)
    assert len(all_models) == 5


async def test_refresh_replaces_old_cache():
    conn = db_module.get_conn()
    old_time = datetime.now(timezone.utc).replace(tzinfo=None) - timedelta(hours=30)
    conn.execute(
        "INSERT INTO models_cache (model_id, name, modality, fetched_at) VALUES (?, ?, ?, ?)",
        ["old/model", "Old Model", "text", old_time],
    )
    with patch(
        "app.services.models.openrouter.list_models",
        new_callable=AsyncMock,
        return_value=FAKE_MODELS_RAW,
    ):
        await refresh_models_cache(conn)
    ids = [m["id"] for m in get_cached_models(conn)]
    assert "old/model" not in ids
    assert "openai/gpt-4o" in ids
    assert len(ids) == 5


def test_get_cached_models_filter_by_modality():
    conn = db_module.get_conn()
    now = datetime.now(timezone.utc).replace(tzinfo=None)
    for m in FAKE_MODELS_RAW:
        modality = _extract_output_modality(m)
        conn.execute(
            "INSERT INTO models_cache (model_id, name, modality, fetched_at) VALUES (?, ?, ?, ?)",
            [m["id"], m["name"], modality, now],
        )
    text_models = get_cached_models(conn, modality="text")
    # gpt-4o, claude-3-haiku (gemini has output_modalities=["image","text"] → classified as "image")
    assert len(text_models) == 2
    assert all(m["modality"] == "text" for m in text_models)

    image_models = get_cached_models(conn, modality="image")
    # dall-e-3 + gemini (output_modalities starts with image)
    assert len(image_models) == 2
    image_ids = [m["id"] for m in image_models]
    assert "openai/dall-e-3" in image_ids
    assert "google/gemini-2.5-flash-image" in image_ids

    video_models = get_cached_models(conn, modality="video")
    assert len(video_models) == 1
    assert video_models[0]["id"] == "openai/sora-2"


# ---------------------------------------------------------------------------
# Integration tests: GET /models/
# ---------------------------------------------------------------------------


async def test_list_models_endpoint_auto_refreshes(client):
    token = await _register_login(client, "user@example.com", "secret123")
    with patch(
        "app.services.models.openrouter.list_models",
        new_callable=AsyncMock,
        return_value=FAKE_MODELS_RAW,
    ) as mock_fetch:
        resp = await client.get(
            "/models/", headers={"Authorization": f"Bearer {token}"}
        )
    assert resp.status_code == 200
    assert len(resp.json()) == 5
    assert mock_fetch.await_count >= 1


async def test_list_models_endpoint_uses_cache(client):
    token = await _register_login(client, "user@example.com", "secret123")
    conn = db_module.get_conn()
    now = datetime.now(timezone.utc).replace(tzinfo=None)
    conn.execute(
        "INSERT INTO models_cache (model_id, name, modality, fetched_at) VALUES (?, ?, ?, ?)",
        ["cached/model", "Cached Model", "text", now],
    )
    with patch(
        "app.services.models.openrouter.list_models",
        new_callable=AsyncMock,
    ) as mock_fetch:
        resp = await client.get(
            "/models/?modality=text", headers={"Authorization": f"Bearer {token}"}
        )
    assert resp.status_code == 200
    assert resp.json()[0]["id"] == "cached/model"
    mock_fetch.assert_not_awaited()


async def test_list_models_endpoint_requires_auth(client):
    resp = await client.get("/models/")
    assert resp.status_code == 401


async def test_list_models_filter_by_modality(client):
    token = await _register_login(client, "user@example.com", "secret123")
    with patch(
        "app.services.models.openrouter.list_models",
        new_callable=AsyncMock,
        return_value=FAKE_MODELS_RAW,
    ):
        resp = await client.get(
            "/models/?modality=image", headers={"Authorization": f"Bearer {token}"}
        )
    assert resp.status_code == 200
    data = resp.json()
    assert len(data) == 2  # dall-e-3 + gemini-2.5-flash-image
    image_ids = [m["id"] for m in data]
    assert "openai/dall-e-3" in image_ids
    assert "google/gemini-2.5-flash-image" in image_ids


# ---------------------------------------------------------------------------
# Integration tests: POST /models/refresh
# ---------------------------------------------------------------------------


async def test_refresh_endpoint_requires_admin(client):
    token = await _register_login(client, "user@example.com", "secret123")
    resp = await client.post(
        "/models/refresh", headers={"Authorization": f"Bearer {token}"}
    )
    assert resp.status_code == 403


async def test_refresh_endpoint_admin_succeeds(client):
    token = await _register_login(client, "admin@example.com", "secret123", is_admin=True)
    with patch(
        "app.services.models.openrouter.list_models",
        new_callable=AsyncMock,
        return_value=FAKE_MODELS_RAW,
    ):
        resp = await client.post(
            "/models/refresh", headers={"Authorization": f"Bearer {token}"}
        )
    assert resp.status_code == 200
    assert resp.json()["refreshed"] == 5


async def test_refresh_endpoint_502_on_openrouter_error(client):
    token = await _register_login(client, "admin@example.com", "secret123", is_admin=True)
    with patch(
        "app.services.models.openrouter.list_models",
        new_callable=AsyncMock,
        side_effect=RuntimeError("network error"),
    ):
        resp = await client.post(
            "/models/refresh", headers={"Authorization": f"Bearer {token}"}
        )
    assert resp.status_code == 502


# ---------------------------------------------------------------------------
# Unit tests: get_model_output_modalities
# ---------------------------------------------------------------------------


async def test_get_model_output_modalities_image_only():
    conn = db_module.get_conn()
    with patch(
        "app.services.models.openrouter.list_models",
        new_callable=AsyncMock,
        return_value=FAKE_MODELS_RAW,
    ):
        await refresh_models_cache(conn)
    modalities = get_model_output_modalities(conn, "openai/dall-e-3")
    assert modalities == ["image"]


async def test_get_model_output_modalities_image_text():
    conn = db_module.get_conn()
    with patch(
        "app.services.models.openrouter.list_models",
        new_callable=AsyncMock,
        return_value=FAKE_MODELS_RAW,
    ):
        await refresh_models_cache(conn)
    modalities = get_model_output_modalities(conn, "google/gemini-2.5-flash-image")
    assert set(modalities) == {"image", "text"}


def test_get_model_output_modalities_unknown_model():
    conn = db_module.get_conn()
    result = get_model_output_modalities(conn, "unknown/model")
    assert result == []
@@ -0,0 +1,208 @@
"""Integration tests for user management endpoints."""
import os
import pytest
import pytest_asyncio
from httpx import AsyncClient, ASGITransport

from app.main import app
from app import db as db_module

os.environ.setdefault("JWT_SECRET", "test-secret-key-for-testing-only")


@pytest.fixture(autouse=True)
def fresh_db():
    db_module._conn = None
    db_module.init_db(":memory:")
    yield
    db_module.close_db()
    db_module._conn = None


@pytest_asyncio.fixture
async def client(fresh_db):
    transport = ASGITransport(app=app)
    async with AsyncClient(transport=transport, base_url="http://test") as ac:
        yield ac


async def _register_and_login(client, email="user@example.com", password="secret123"):
    await client.post("/auth/register", json={"email": email, "password": password})
    resp = await client.post("/auth/login", json={"email": email, "password": password})
    return resp.json()["access_token"]


async def _make_admin(user_id: str):
    """Directly set a user's role to admin in the DB."""
    conn = db_module.get_conn()
    conn.execute("UPDATE users SET role = 'admin' WHERE id = ?", [user_id])


# ---------------------------------------------------------------------------
# GET /users/me
# ---------------------------------------------------------------------------


async def test_get_me(client):
    token = await _register_and_login(client)
    resp = await client.get("/users/me", headers={"Authorization": f"Bearer {token}"})
    assert resp.status_code == 200
    data = resp.json()
    assert data["email"] == "user@example.com"
    assert data["role"] == "user"
    assert "id" in data


async def test_get_me_unauthenticated(client):
    resp = await client.get("/users/me")
    assert resp.status_code == 401


# ---------------------------------------------------------------------------
# PUT /users/me
# ---------------------------------------------------------------------------


async def test_update_me_email(client):
    token = await _register_and_login(client)
    resp = await client.put(
        "/users/me",
        json={"email": "new@example.com"},
        headers={"Authorization": f"Bearer {token}"},
    )
    assert resp.status_code == 200
    assert resp.json()["email"] == "new@example.com"


async def test_update_me_password(client):
    token = await _register_and_login(client)
    resp = await client.put(
        "/users/me",
        json={"password": "newpassword123"},
        headers={"Authorization": f"Bearer {token}"},
    )
    assert resp.status_code == 200
    # Verify new password works for login
    login = await client.post(
        "/auth/login", json={"email": "user@example.com", "password": "newpassword123"}
    )
    assert login.status_code == 200


async def test_update_me_duplicate_email(client):
    await _register_and_login(client, "other@example.com")
    token = await _register_and_login(client, "user@example.com")
    resp = await client.put(
        "/users/me",
        json={"email": "other@example.com"},
        headers={"Authorization": f"Bearer {token}"},
    )
    assert resp.status_code == 409


# ---------------------------------------------------------------------------
# GET /users (admin only)
# ---------------------------------------------------------------------------


async def test_list_users_as_admin(client):
    token = await _register_and_login(client)
    me = await client.get("/users/me", headers={"Authorization": f"Bearer {token}"})
    await _make_admin(me.json()["id"])

    # Re-login to get a token with admin role
    login = await client.post(
        "/auth/login", json={"email": "user@example.com", "password": "secret123"}
    )
    admin_token = login.json()["access_token"]
    resp = await client.get("/users", headers={"Authorization": f"Bearer {admin_token}"})
    assert resp.status_code == 200
    assert isinstance(resp.json(), list)
    assert len(resp.json()) >= 1
    emails = [u["email"] for u in resp.json()]
    assert "user@example.com" in emails


async def test_list_users_as_regular_user(client):
    token = await _register_and_login(client)
    resp = await client.get("/users", headers={"Authorization": f"Bearer {token}"})
    assert resp.status_code == 403


# ---------------------------------------------------------------------------
# DELETE /users/{id} (admin only)
# ---------------------------------------------------------------------------
|
||||
async def test_delete_user_as_admin(client):
|
||||
# Register target user
|
||||
await client.post("/auth/register", json={"email": "target@example.com", "password": "secret123"})
|
||||
target_resp = await client.post("/auth/login", json={"email": "target@example.com", "password": "secret123"})
|
||||
target_token = target_resp.json()["access_token"]
|
||||
target_me = await client.get("/users/me", headers={"Authorization": f"Bearer {target_token}"})
|
||||
target_id = target_me.json()["id"]
|
||||
|
||||
# Register admin
|
||||
token = await _register_and_login(client)
|
||||
me = await client.get("/users/me", headers={"Authorization": f"Bearer {token}"})
|
||||
await _make_admin(me.json()["id"])
|
||||
login = await client.post("/auth/login", json={"email": "user@example.com", "password": "secret123"})
|
||||
admin_token = login.json()["access_token"]
|
||||
|
||||
resp = await client.delete(f"/users/{target_id}", headers={"Authorization": f"Bearer {admin_token}"})
|
||||
assert resp.status_code == 204
|
||||
|
||||
|
||||
async def test_delete_own_account_forbidden(client):
|
||||
token = await _register_and_login(client)
|
||||
me = await client.get("/users/me", headers={"Authorization": f"Bearer {token}"})
|
||||
user_id = me.json()["id"]
|
||||
await _make_admin(user_id)
|
||||
login = await client.post("/auth/login", json={"email": "user@example.com", "password": "secret123"})
|
||||
admin_token = login.json()["access_token"]
|
||||
|
||||
resp = await client.delete(f"/users/{user_id}", headers={"Authorization": f"Bearer {admin_token}"})
|
||||
assert resp.status_code == 400
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# PUT /users/{id}/role (admin only)
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
async def test_set_role_as_admin(client):
|
||||
# Register target
|
||||
await client.post("/auth/register", json={"email": "target@example.com", "password": "secret123"})
|
||||
target_resp = await client.post("/auth/login", json={"email": "target@example.com", "password": "secret123"})
|
||||
target_me = await client.get("/users/me", headers={"Authorization": f"Bearer {target_resp.json()['access_token']}"})
|
||||
target_id = target_me.json()["id"]
|
||||
|
||||
# Register admin
|
||||
token = await _register_and_login(client)
|
||||
me = await client.get("/users/me", headers={"Authorization": f"Bearer {token}"})
|
||||
await _make_admin(me.json()["id"])
|
||||
login = await client.post("/auth/login", json={"email": "user@example.com", "password": "secret123"})
|
||||
admin_token = login.json()["access_token"]
|
||||
|
||||
resp = await client.put(
|
||||
f"/users/{target_id}/role",
|
||||
json={"role": "admin"},
|
||||
headers={"Authorization": f"Bearer {admin_token}"},
|
||||
)
|
||||
assert resp.status_code == 200
|
||||
assert resp.json()["role"] == "admin"
|
||||
|
||||
|
||||
async def test_set_invalid_role(client):
|
||||
await client.post("/auth/register", json={"email": "target@example.com", "password": "secret123"})
|
||||
target_resp = await client.post("/auth/login", json={"email": "target@example.com", "password": "secret123"})
|
||||
target_me = await client.get("/users/me", headers={"Authorization": f"Bearer {target_resp.json()['access_token']}"})
|
||||
target_id = target_me.json()["id"]
|
||||
|
||||
token = await _register_and_login(client)
|
||||
me = await client.get("/users/me", headers={"Authorization": f"Bearer {token}"})
|
||||
await _make_admin(me.json()["id"])
|
||||
login = await client.post("/auth/login", json={"email": "user@example.com", "password": "secret123"})
|
||||
admin_token = login.json()["access_token"]
|
||||
|
||||
resp = await client.put(
|
||||
f"/users/{target_id}/role",
|
||||
json={"role": "superuser"},
|
||||
headers={"Authorization": f"Bearer {admin_token}"},
|
||||
)
|
||||
assert resp.status_code == 422
|
||||
@@ -0,0 +1,34 @@
# 1. Introduction & Goals

Describes the relevant requirements and the driving forces that software architects and the development team must consider. These include underlying business goals, essential features, essential functional requirements, quality goals for the architecture, and relevant stakeholders and their expectations.

## Requirements Overview

**Project name**: All You Can GET AI
**URL**: [https://ai.allucanget.biz](https://ai.allucanget.biz)
**Purpose**: Provide AI‑powered text, image, and video generation services via a web application.

Users can choose between different AI models for:

- Text generation
- Text‑to‑image generation
- Text‑to‑video generation
- Image‑to‑video generation

Users can create accounts, log in, and view their generation history in a gallery. An admin dashboard allows managing users, models, and video generation jobs.

## Quality Goals

| Priority | Quality Goal      | Scenario                                      |
| -------- | ----------------- | --------------------------------------------- |
| 1        | High availability | Service is accessible 99.9% of the time       |
| 2        | Low latency       | Generation endpoints respond within 200 ms    |
| 3        | Data privacy      | User data is encrypted at rest and in transit |

## Stakeholders

| Role/Name     | Contact    | Expectations                                  |
| ------------- | ---------- | --------------------------------------------- |
| Developer     | Team       | Clean APIs, testable code, good documentation |
| End User      | Public     | Fast, reliable AI generation features         |
| Product Owner | Management | Feature completeness, uptime, cost control    |
@@ -0,0 +1,21 @@
# 10. Quality Requirements

This section contains all relevant quality requirements. The most important ones are already described in section 1.2 (Quality Goals) and are only referenced here.

## Quality Requirements Overview

| Category        | Quality Requirement                                                     |
| --------------- | ----------------------------------------------------------------------- |
| Availability    | Service available ≥ 99.9% of the time                                   |
| Performance     | Generation endpoints respond within 200 ms (excluding AI model latency) |
| Security        | All user data encrypted at rest and in transit; JWT-protected endpoints |
| Maintainability | Test coverage > 80% on `app/`; CI pipeline enforced                     |
| Portability     | Runs on Windows and Linux without modification                          |

## Quality Scenarios

| Scenario ID | Source    | Stimulus                        | Environment | Artifact     | Response                        | Response Measure                          |
| ----------- | --------- | ------------------------------- | ----------- | ------------ | ------------------------------- | ----------------------------------------- |
| QS-01       | End user  | Submits text generation request | Production  | AI Service   | Returns generated text          | < 200 ms (excluding model latency)        |
| QS-02       | Attacker  | Sends request without valid JWT | Production  | Auth Service | Returns 401 Unauthorized        | 100% of unauthenticated requests rejected |
| QS-03       | Developer | Adds new API endpoint           | Development | FastAPI app  | Tests pass, coverage maintained | CI pipeline green                         |
@@ -0,0 +1,12 @@
# 11. Risks and Technical Debt

A list of identified technical risks or technical debts, ordered by priority.

> "Risk management is project management for grown-ups." — Tim Lister, Atlantic Systems Guild.

| Priority | Risk / Technical Debt                          | Probability | Impact | Suggested Measures                                                       |
| -------- | ---------------------------------------------- | ----------- | ------ | ------------------------------------------------------------------------ |
| 1        | Dependency on openrouter.ai availability       | medium      | high   | Add fallback models; implement retry logic with exponential backoff      |
| 2        | DuckDB schema changes break existing data      | low         | high   | Version migrations; backup strategy before upgrades                      |
| 3        | Single-process DuckDB limits write concurrency | low         | medium | Monitor load; consider migration to PostgreSQL if needed                 |
| 4        | JWT secret leak                                | low         | high   | Rotate secrets via environment variables; never commit to source control |
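The suggested measure for risk 1, retrying openrouter.ai calls with exponential backoff, could look like the following minimal sketch. The `with_backoff` helper and its delay constants are illustrative, not taken from the codebase; real code would retry only transient failures (timeouts, HTTP 429/5xx) instead of every exception.

```python
import asyncio
import random


async def with_backoff(call, *, attempts: int = 4, base_delay: float = 0.5):
    """Retry an async callable with exponential backoff and jitter.

    Illustrative sketch: a production version would inspect the error and
    retry only on transient conditions (timeouts, HTTP 429/5xx).
    """
    for attempt in range(attempts):
        try:
            return await call()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            # delays grow 0.5s, 1s, 2s, ... plus jitter to avoid thundering herds
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            await asyncio.sleep(delay)
```

A caller would wrap the OpenRouter request in a zero-argument coroutine function and pass it as `call`.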
@@ -0,0 +1,15 @@
# 12. Glossary

The most important domain and technical terms that stakeholders use when discussing the system.

| Term          | Definition                                                                              |
| ------------- | --------------------------------------------------------------------------------------- |
| AI            | Artificial Intelligence — machine learning models that generate text, images, or video  |
| API           | Application Programming Interface — HTTP endpoints exposed by the FastAPI backend       |
| JWT           | JSON Web Token — signed token used for stateless authentication                         |
| DuckDB        | Embedded analytical database used for persistent storage                                |
| FastAPI       | Python async web framework used for the backend                                         |
| Flask         | Python web framework used for the frontend                                              |
| openrouter.ai | Third-party API aggregator providing access to multiple AI model providers              |
| ADR           | Architecture Decision Record — a document capturing an important architectural decision |
| HTTPS         | Hypertext Transfer Protocol Secure — encrypted HTTP communication                       |
@@ -0,0 +1,26 @@
# 2. Architecture Constraints

Any requirement that constrains software architects in their freedom of design and implementation decisions, or in decisions about the development process. These constraints sometimes go beyond individual systems and are valid for whole organizations and companies.

## Technical Constraints

| Constraint                    | Background / Motivation                     |
| ----------------------------- | ------------------------------------------- |
| Must run on Windows and Linux | Cross-platform developer environments       |
| FastAPI for backend           | Async performance, OpenAPI support          |
| Flask for frontend            | Lightweight UI serving                      |
| DuckDB as embedded database   | Lightweight, no separate DB server required |
| External AI via openrouter.ai | Unified access to multiple AI models        |

## Organizational Constraints

| Constraint        | Background / Motivation           |
| ----------------- | --------------------------------- |
| Open source stack | Cost reduction, community support |

## Conventions

| Convention           | Background / Motivation                             |
| -------------------- | --------------------------------------------------- |
| Python 3.12+         | Modern language features, type hints                |
| pytest for all tests | Consistent test tooling across backend and frontend |
@@ -0,0 +1,26 @@
# 3. Context and Scope

Delimits your system (i.e. your scope) from all its communication partners (neighboring systems and users, i.e. the context of your system). It thereby specifies the external interfaces.

## Business Context

Specification of **all** communication partners (users, IT-systems, …) with explanations of domain-specific inputs and outputs or interfaces.

The system exposes REST APIs for authentication, user management, and AI generation. The Flask frontend consumes these APIs and serves pages at `https://ai.allucanget.biz`.

| Communication Partner | Input                          | Output                         |
| --------------------- | ------------------------------ | ------------------------------ |
| End User (browser)    | Form submissions, API requests | Generated text, images, videos |
| openrouter.ai API     | Prompt + model selection       | AI-generated content           |
| DuckDB                | SQL queries                    | User data, session data        |

## Technical Context

| Channel                          | Protocol   | Description        |
| -------------------------------- | ---------- | ------------------ |
| User ↔ Flask frontend            | HTTPS      | Browser-based UI   |
| Flask frontend ↔ FastAPI backend | HTTP REST  | Internal API calls |
| FastAPI backend ↔ openrouter.ai  | HTTPS REST | AI model inference |
| FastAPI backend ↔ DuckDB         | In-process | Embedded DB access |

**Mapping Input/Output to Channels:** All user-facing traffic flows through HTTPS. The backend communicates with openrouter.ai over HTTPS. DuckDB is accessed in-process with no network channel.
@@ -0,0 +1,23 @@
# 4. Solution Strategy

A short summary and explanation of the fundamental decisions and solution strategies that shape the system architecture. It includes:

- Technology decisions
- Decisions about the top-level decomposition of the system, e.g. usage of an architectural pattern or design pattern
- Decisions on how to achieve key quality goals
- Relevant organizational decisions, e.g. selecting a development process or delegating certain tasks to third parties

Two-process architecture: a single FastAPI process handles all API endpoints, while a separate Flask process serves the UI.

| Quality Goal      | Scenario                 | Solution Approach                                         |
| ----------------- | ------------------------ | --------------------------------------------------------- |
| High availability | Service must be up 99.9% | Docker containers with health checks and restart policies |
| Low latency       | Responses < 200 ms       | FastAPI async endpoints; DuckDB in-process                |
| Data privacy      | Encrypted data           | JWT auth, HTTPS everywhere, DuckDB file encryption        |

**Key technology decisions:**

- **FastAPI** for async performance and automatic OpenAPI docs
- **DuckDB** for a lightweight embedded DB with no separate server
- **Flask** for a thin frontend layer with Jinja2 templates
- **openrouter.ai** for unified access to multiple AI model providers
@@ -0,0 +1,110 @@
# 5. Building Block View

Static decomposition of the system into building blocks (modules, components, subsystems, classes, interfaces, packages, libraries, frameworks, layers, partitions, tiers, functions, macros, operations, data structures, …) as well as their dependencies (relationships, associations, …).

## Level 1 – Whitebox Overall System

```text
┌────────────────────────┐
│   Frontend (Flask)     │
└───────┬────────────────┘
        │ REST API calls
┌───────▼────────────────┐
│   FastAPI Backend      │
│  ├─ Auth Service       │
│  ├─ User Service       │
│  ├─ AI Service         │
│  └─ DB Service (DuckDB)│
└───────┬────────────────┘
        │ DB access
┌───────▼────────────────┐
│   DuckDB Database      │
└────────────────────────┘
```

**Motivation:** Separating the UI (Flask) from the API (FastAPI) allows independent scaling and testing of each layer.

**Contained Building Blocks:**

| Name             | Responsibility                                     |
| ---------------- | -------------------------------------------------- |
| Frontend (Flask) | Serves HTML/CSS/JS UI, proxies requests to FastAPI |
| FastAPI Backend  | Handles auth, user management, AI generation API   |
| Auth Service     | JWT issuance and validation                        |
| User Service     | CRUD operations for user accounts                  |
| AI Service       | Sends prompts to openrouter.ai, returns results    |
| DB Service       | Reads/writes to DuckDB                             |
| DuckDB Database  | Persistent storage for users and session data      |

## Level 2 – FastAPI Backend internals

### White Box Auth Service (`/auth`)

Handles JWT issuance, validation, and refresh token lifecycle.

| Method | Path             | Auth required | Description                                      |
| ------ | ---------------- | :-----------: | ------------------------------------------------ |
| POST   | `/auth/register` |       —       | Create a new user account                        |
| POST   | `/auth/login`    |       —       | Authenticate and receive access + refresh tokens |
| POST   | `/auth/refresh`  |       —       | Rotate refresh token, get a new access token     |
| POST   | `/auth/logout`   |       —       | Revoke refresh token                             |
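The signing and validation work behind these endpoints can be illustrated with a stdlib-only HS256 sketch. This is not the backend's actual implementation (a JWT library such as PyJWT is the usual choice); the 900-second TTL mirrors the 15-minute access-token lifetime described in the Runtime View.

```python
import base64
import hashlib
import hmac
import json
import time


def _b64(data: bytes) -> str:
    # URL-safe base64 without padding, as used in JWTs
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def sign_jwt(payload: dict, secret: str, ttl_seconds: int = 900) -> str:
    """Create an HS256-signed token; 900 s matches a 15 min access-token TTL."""
    header = {"alg": "HS256", "typ": "JWT"}
    payload = {**payload, "exp": int(time.time()) + ttl_seconds}
    signing_input = f"{_b64(json.dumps(header).encode())}.{_b64(json.dumps(payload).encode())}"
    sig = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{_b64(sig)}"


def verify_jwt(token: str, secret: str) -> dict:
    """Return the payload if the signature is valid and the token is unexpired."""
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(_b64(expected), sig):
        raise ValueError("bad signature")
    payload_b64 = signing_input.split(".")[1]
    payload = json.loads(base64.urlsafe_b64decode(payload_b64 + "=" * (-len(payload_b64) % 4)))
    if payload["exp"] < time.time():
        raise ValueError("token expired")
    return payload
```

`verify_jwt` uses `hmac.compare_digest` to keep the signature check constant-time, which is the part hand-rolled token code most often gets wrong.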
### White Box User Service (`/users`)

Self-service profile management and admin user CRUD.

| Method | Path               | Auth required | Admin only | Description                              |
| ------ | ------------------ | ------------- | ---------- | ---------------------------------------- |
| GET    | `/users/me`        | ✓             | —          | Get current user profile                 |
| PUT    | `/users/me`        | ✓             | —          | Update own email or password             |
| GET    | `/users`           | ✓             | ✓          | List all users                           |
| DELETE | `/users/{id}`      | ✓             | ✓          | Delete a user (cannot self-delete)       |
| PUT    | `/users/{id}/role` | ✓             | ✓          | Change a user's role (`user` \| `admin`) |

### White Box Admin Service (`/admin`)

Operational endpoints for application management.

| Method | Path                        | Auth required | Admin only | Description                                |
| ------ | --------------------------- | ------------- | ---------- | ------------------------------------------ |
| GET    | `/admin/stats`              | ✓             | ✓          | User counts by role, token activity        |
| GET    | `/admin/health/db`          | ✓             | ✓          | DuckDB connectivity check                  |
| POST   | `/admin/tokens/purge`       | ✓             | ✓          | Remove expired/revoked refresh tokens      |
| GET    | `/admin/videos`             | ✓             | ✓          | List all video jobs with user emails       |
| POST   | `/admin/videos/{id}/cancel` | ✓             | ✓          | Cancel a queued/processing video job       |
| POST   | `/admin/videos/{id}/retry`  | ✓             | ✓          | Retry a failed/cancelled video job         |
| DELETE | `/admin/videos/{id}`        | ✓             | ✓          | Permanently delete a video job             |
| POST   | `/admin/videos/purge`       | ✓             | ✓          | Delete old completed/failed/cancelled jobs |
| POST   | `/admin/videos/timed-out`   | ✓             | ✓          | Mark stale processing jobs as failed       |
| GET    | `/admin/models`             | ✓             | ✓          | List cached OpenRouter models              |
| POST   | `/admin/models/refresh`     | ✓             | ✓          | Refresh model cache from OpenRouter        |

### White Box AI Service (`/ai`, `/generate`)

Model listing and multi-modal generation via openrouter.ai.

| Method | Path                           | Auth required | Description                                                                                                         |
| ------ | ------------------------------ | ------------- | ------------------------------------------------------------------------------------------------------------------- |
| GET    | `/ai/models`                   | ✓             | List available OpenRouter models                                                                                    |
| POST   | `/ai/chat`                     | ✓             | Multi-turn chat completion                                                                                          |
| POST   | `/generate/text`               | ✓             | Single-prompt text generation (optional system prompt)                                                              |
| POST   | `/generate/image`              | ✓             | Text-to-image (DALL-E via `/images/generations` or FLUX/GPT-5 Image Mini via `/chat/completions` with `modalities`) |
| POST   | `/generate/video`              | ✓             | Text-to-video (Sora 2 Pro, Veo 3.1 Fast) — returns `polling_url`                                                    |
| POST   | `/generate/video/from-image`   | ✓             | Image-to-video — returns `polling_url`                                                                              |
| GET    | `/generate/video/status`       | ✓             | Poll video generation status via `polling_url`                                                                      |
| GET    | `/generate/images`             | ✓             | List current user's generated images                                                                                |
| GET    | `/generate/images/{id}`        | ✓             | Get a single generated image                                                                                        |
| GET    | `/generate/videos`             | ✓             | List current user's video jobs                                                                                      |
| GET    | `/generate/videos/{id}`        | ✓             | Get a single video job                                                                                              |
| POST   | `/generate/videos/{id}/cancel` | ✓             | Cancel a queued/processing video job                                                                                |

**Video generation flow:** The `/generate/video` and `/generate/video/from-image` endpoints queue a job in the local database and return immediately with `status: "queued"`. A background worker (`video_worker.py`) submits the job to OpenRouter's `/api/v1/videos` endpoint, receives a `polling_url`, and polls it periodically until the job reaches `"completed"` or `"failed"`. The frontend polls `GET /generate/video/{id}/status` every 5 seconds to show live status updates.

**Image generation routing:** The router auto-detects the model type — models containing `"flux"` or `"gpt-5-image-mini"` are routed to `/chat/completions` with `modalities: ["image"]` (or `["image", "text"]` depending on cached output modalities), while others (e.g. DALL-E 3) use the legacy `/images/generations` endpoint.
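The queue-and-poll lifecycle described in the video generation flow above can be sketched as a simplified worker loop. The `submit_job` and `poll_status` callables stand in for the real OpenRouter HTTP calls, and the `job` dict stands in for a `generated_videos` row; all names here are illustrative.

```python
import asyncio


async def run_video_job(job: dict, submit_job, poll_status, interval: float = 15.0):
    """Drive one queued job to a terminal state, mirroring the DB status column.

    Sketch of the video_worker.py behaviour described above:
    submit, mark processing, then poll until completed/failed.
    """
    submission = await submit_job(job)            # real worker: POST /api/v1/videos
    job["status"] = "processing"
    job["polling_url"] = submission["polling_url"]
    while True:
        result = await poll_status(job["polling_url"])
        if result["status"] in ("completed", "failed"):
            job["status"] = result["status"]      # terminal state written back to DB
            job["video_url"] = result.get("video_url")
            return job
        await asyncio.sleep(interval)             # real worker polls every 15 s
```

The real worker also handles cancellation and timeouts (see the `/admin/videos/timed-out` endpoint); those branches are omitted here for brevity.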
### White Box DB Service (`db.py`)

- Singleton DuckDB connection, opened on FastAPI startup via lifespan hook.
- `asyncio.Lock` serialises all write operations within the single process.
- Schema migration runs automatically on first startup.
- Tables: `users`, `refresh_tokens` (see Section 8 for schema details).
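The lock-serialisation pattern from the DB Service bullets can be sketched as follows. An in-memory list stands in for the real DuckDB connection, since only the concurrency pattern matters here; the class and method names are illustrative, not the actual `db.py` API.

```python
import asyncio


class SerializedDB:
    """One shared connection; an asyncio.Lock serialises all writes.

    Stand-in sketch: self._rows plays the role of the DuckDB connection.
    """

    def __init__(self):
        self._rows: list[tuple] = []
        self._write_lock = asyncio.Lock()

    async def execute_write(self, row: tuple):
        async with self._write_lock:
            # Only one coroutine is ever inside this block at a time,
            # so concurrent request handlers cannot interleave writes.
            self._rows.append(row)

    def query_all(self) -> list[tuple]:
        # Reads go straight through; no lock needed in this sketch.
        return list(self._rows)
```

Because FastAPI runs many request handlers concurrently on one event loop, this pattern keeps the single embedded connection safe without a connection pool.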
@@ -0,0 +1,98 @@
# 6. Runtime View

Describes concrete behavior and interactions of the system's building blocks in the form of scenarios from the following areas:

- Important use cases or features: how do building blocks execute them?
- Interactions at critical external interfaces: how do building blocks cooperate with users and neighboring systems?
- Operation and administration: launch, start-up, stop
- Error and exception scenarios

## Scenario 1: User Authentication

1. User submits login form in Flask frontend
2. Flask POSTs credentials to `POST /auth/login`
3. Auth Service validates credentials against DuckDB
4. Auth Service returns JWT token
5. Flask stores token in session cookie
6. User is redirected to dashboard

## Scenario 2: AI Text Generation

1. User fills in text generation form in Flask frontend
2. Flask POSTs prompt + model to `POST /generate/text` with JWT header
3. Auth Service validates JWT
4. AI Service sends prompt to openrouter.ai
5. openrouter.ai returns generated text
6. FastAPI returns result to Flask
7. Flask renders result page for user

## Scenario 3: Image Generation

1. User submits image generation form with prompt, model, size, aspect ratio, and resolution
2. Flask POSTs to `POST /generate/image` with JWT header
3. Router auto-detects model type:
   - **FLUX / GPT-5 Image Mini**: calls `/chat/completions` with `modalities: ["image"]` and `image_config`
   - **DALL-E 3**: calls `/images/generations` with `size` and `n`
4. Image URL (base64 data URL or hosted URL) returned to Flask
5. Flask renders page with generated image(s)

## Scenario 3a: Image Generation with Aspect Ratio & Resolution

1. User selects aspect ratio (e.g. `16:9`) and resolution (`2K`) on the image generation form
2. Flask POSTs `aspect_ratio` and `image_size` to `POST /generate/image`
3. Backend passes these as `image_config` to the chat completions endpoint (for FLUX/GPT-5 Image Mini)
4. Generated image respects the requested aspect ratio and resolution
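The auto-detection rule in Scenario 3, step 3 can be sketched as a small routing helper. This encodes only the rule as stated; per the Building Block View, the real router additionally consults cached output modalities.

```python
def pick_image_endpoint(model: str) -> str:
    """Route an image model to the matching OpenRouter endpoint.

    Sketch of the auto-detection rule from Scenario 3, step 3.
    """
    name = model.lower()
    if "flux" in name or "gpt-5-image-mini" in name:
        return "/chat/completions"   # called with modalities: ["image"]
    return "/images/generations"     # legacy endpoint (e.g. DALL-E 3)
```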
## Scenario 4: Video Generation (Text-to-Video)

1. User submits video generation form with prompt, model, aspect ratio, resolution, and duration
2. Flask POSTs to `POST /generate/video` with JWT header
3. Auth Service validates JWT
4. Backend inserts a row into `generated_videos` with `status: "queued"` and returns the DB job ID
5. Flask renders result page with polling UI
6. Background worker (`video_worker.py`) picks up queued jobs every 15 seconds:
   - Calls OpenRouter `POST /api/v1/videos` with model, prompt, and parameters
   - Receives `{"id": "...", "polling_url": "..."}` and updates the DB row to `status: "processing"`
   - Polls the `polling_url` every 15 seconds until `status` is `"completed"` or `"failed"`
   - Updates the DB row with the final status and video URL
7. Frontend JavaScript polls `GET /generate/video/{db_id}/status` every 5 seconds
8. When `status` becomes `"completed"`, the response includes `video_url` — the video is displayed in a `<video>` element
9. If `status` becomes `"failed"`, an error message is shown
10. User can click "Cancel Job" to mark the job as `"cancelled"` (stops local polling, does not stop the provider job)

## Scenario 4a: Video Generation (Image-to-Video)

1. User provides an image URL, motion prompt, model, aspect ratio, resolution, and duration
2. Flask POSTs to `POST /generate/video/from-image` with JWT header
3. Same background worker flow as Scenario 4, with `generation_type: "image_to_video"`

## Scenario 4b: Video Job Cancellation

1. User clicks "Cancel Job" on the video detail page or gallery pending card
2. Frontend POSTs to `/generate/video/{id}/cancel`
3. Backend verifies the job belongs to the user and is not in a terminal state
4. Backend updates the DB row `status` to `"cancelled"`
5. Frontend stops polling and updates the UI to show "Job cancelled"

## Scenario 5: Token Refresh

1. Access token expires (TTL 15 min)
2. Client POSTs current refresh token to `POST /auth/refresh`
3. Auth Service validates JTI against `refresh_tokens` table (not revoked, not expired)
4. Old JTI is revoked; new JTI inserted into `refresh_tokens`
5. New access token + new refresh token returned to client

## Scenario 6: Admin User Management

1. Admin logs in and receives access token with `role: admin`
2. Admin GETs `/admin/stats` to view user and token counts
3. Admin DELETEs `/users/{id}` to remove a user — refresh tokens for that user are cascade-deleted
4. Admin PUTs `/users/{id}/role` to promote a user to admin or demote to user

## Scenario 7: User Profile Update

1. Authenticated user navigates to `/users/profile`
2. User submits updated email and/or new password
3. Flask POSTs to `PUT /users/me` with JWT header
4. Auth Service validates credentials and updates user record in DuckDB
5. Session `user_email` is updated; user sees success message
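The rotation logic in Scenario 5 can be sketched with an in-memory JTI store. The real Auth Service keeps these records in the DuckDB `refresh_tokens` table; the function names and the 7-day refresh TTL here are illustrative.

```python
import time
import uuid

# In-memory stand-in for the refresh_tokens table: jti -> token record.
_refresh_tokens: dict[str, dict] = {}


def issue_refresh_token(ttl_seconds: float = 7 * 24 * 3600) -> str:
    """Create a new JTI; TTL is an assumed value, not taken from the backend."""
    jti = str(uuid.uuid4())
    _refresh_tokens[jti] = {"revoked": False, "expires_at": time.time() + ttl_seconds}
    return jti


def rotate_refresh_token(jti: str) -> str:
    """Validate the presented JTI, revoke it, and hand back a replacement.

    Mirrors Scenario 5 steps 3-4: reject unknown, revoked, or expired JTIs,
    then revoke the old one so it can never be replayed.
    """
    record = _refresh_tokens.get(jti)
    if record is None or record["revoked"] or record["expires_at"] < time.time():
        raise ValueError("invalid refresh token")
    record["revoked"] = True
    return issue_refresh_token()
```

Revoking the old JTI before issuing the new one is what makes a stolen-but-already-used refresh token worthless.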
@@ -0,0 +1,83 @@
# 7. Deployment View

Describes:

1. Technical infrastructure used to execute your system, with infrastructure elements like geographical locations, environments, computers, processors, channels and net topologies.
2. Mapping of (software) building blocks to those infrastructure elements.

**See**: [Coolify Deployment Guide](./deployment/coolify.md) for detailed instructions.

## Infrastructure Level 1

Hosted on a single VM running Docker containers, deployed via Coolify with Nixpacks to 192.168.88.18 for production.

Containers run behind nginx at 192.168.88.11, which handles TLS termination and reverse proxying to the frontend on port 12016 and the backend on port 12015. The database is a file on the host filesystem at `data/app.db`, accessed by the backend service.

```mermaid
graph TD
    Users[Users / Internet]
    Nginx[nginx reverse proxy\nTLS termination]
    Users -->|HTTPS| Nginx

    subgraph Coolify Server
        direction TB
        subgraph AI Frontend
            AI_Frontend[AI Frontend\nFlask\nServes HTML/CSS/JS UI]
        end
        subgraph AI Backend
            AI_Backend[AI Backend\nFastAPI\nCommunicates with openrouter.ai API]
            db[(DuckDB Database\nFile: data/app.db)]
            AI_Backend --> db
        end
        AI_Frontend -->|BACKEND_URL:12015| AI_Backend
    end
    Nginx -->|12016| AI_Frontend
```

**Motivation:** All components run as Docker containers for simplicity and low operational overhead.

**Quality and/or Performance Features:** The frontend and backend are stateless; DuckDB persists data on the host filesystem.

**Mapping of Building Blocks to Infrastructure:**

| Building Block  | Container / Process          | Port            |
| --------------- | ---------------------------- | --------------- |
| Nginx           | `nginx`                      | 80/443 (public) |
| Coolify Server  | `coolify`                    | —               |
| Flask frontend  | `frontend`                   | 12016           |
| FastAPI backend | `backend`                    | 12015           |
| DuckDB          | File on host (`data/app.db`) | —               |

## Infrastructure Level 2

### Coolify with Nixpacks (Production)

Both services are deployed as separate Nixpacks resources in Coolify, which results in two separate containers running on the same host. The database is a file on the host filesystem, mounted as a volume in the backend container.

#### Frontend

```mermaid
graph TD
    subgraph Coolify Server
        direction TB
        subgraph AI Frontend
            AI_Frontend[AI Frontend\nNixpacks\nBase Dir: /frontend]
        end
    end
    Users[Users / Internet] -->|HTTPS| AI_Frontend
```

#### Backend

```mermaid
graph TD
    subgraph Coolify Server
        direction TB
        subgraph AI Backend
            AI_Backend[AI Backend\nNixpacks\nBase Dir: /backend]
            db[(DuckDB Database\nVolume: /app/data)]
            AI_Backend --> db
        end
    end
    Frontend[Frontend Container] -->|BACKEND_URL:12015| AI_Backend
```
@@ -0,0 +1,35 @@

# 8. Cross-cutting Concepts

Describes cross-cutting concepts (practices, patterns, regulations, or solution ideas). Such concepts often relate to multiple building blocks and may cover many different topics, such as domain models, architecture patterns, rules for using specific technology, security, logging, and error handling.

> Pick **only** the most-needed topics for your system.

## OpenRouter API Integration

See [docs/8.1-openrouter.md](./8.1-openrouter.md) for details on how the backend integrates with OpenRouter for multi-modal AI generation, including image and video generation flows.

## DuckDB Concurrency and Storage

See [docs/8.2-duckdb.md](./8.2-duckdb.md) for details on how the backend handles concurrent access to DuckDB and manages the database file on the host filesystem.

## Security

- All API endpoints (except `/auth/login`) require a valid JWT in the `Authorization: Bearer` header.
- HTTPS is enforced in production via a reverse proxy (nginx or Caddy).
- Passwords are stored as bcrypt hashes.

## Logging

- Structured JSON logs from FastAPI via Python `logging` + `structlog`.
- OpenTelemetry traces exported for observability.
- Log level configurable via environment variable `LOG_LEVEL`.
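A dependency-free sketch of the `LOG_LEVEL` wiring (the repo layers structlog on top of stdlib `logging`; `JsonFormatter` and `configure_logging` are illustrative names, not from the repo):

```python
import json
import logging
import os


class JsonFormatter(logging.Formatter):
    """Render each log record as one structured JSON line."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })


def configure_logging() -> logging.Logger:
    # Fall back to INFO when LOG_LEVEL is unset or unrecognised.
    level_name = os.getenv("LOG_LEVEL", "INFO").upper()
    handler = logging.StreamHandler()
    handler.setFormatter(JsonFormatter())
    logger = logging.getLogger("app")
    logger.setLevel(getattr(logging, level_name, logging.INFO))
    logger.handlers = [handler]
    return logger
```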

## Error Handling

- All API errors return a unified JSON shape: `{ "error": "<code>", "message": "<description>" }`.
- HTTP status codes follow REST conventions (400, 401, 403, 404, 422, 500).
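For illustration, the unified shape can be produced from a small exception type like the following sketch (`ApiError` is an assumed name, not necessarily the repo's actual class):

```python
from dataclasses import dataclass


@dataclass
class ApiError(Exception):
    """Carries an HTTP status plus the fields of the unified error body."""
    status_code: int
    error: str
    message: str

    def body(self) -> dict:
        # The unified JSON shape returned by every failing endpoint.
        return {"error": self.error, "message": self.message}
```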

## Configuration

- All secrets (API keys, DB path, JWT secret) loaded from environment variables or `.env` file.
- No secrets committed to source control.
@@ -0,0 +1,31 @@

# OpenRouter API Integration

## Text Generation

> [!warning]
> TODO: Add more details on how the backend integrates with OpenRouter for text generation, including chat completions and single-prompt generation flows.

## Image Generation

Image generation uses two different OpenRouter endpoints depending on the model:

- **Legacy endpoint** (`/images/generations`): Used by DALL-E 3 and similar models. Returns `data[].url` and `data[].b64_json`.
- **Chat completions** (`/chat/completions` with `modalities: ["image"]`): Used by FLUX.2 Klein 4B and GPT-5 Image Mini. Returns `choices[0].message.images[].image_url.url` as base64 data URLs.

The router auto-detects the model type and routes accordingly. Image configuration (`aspect_ratio`, `image_size`) is passed via `image_config` for chat-based models.
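Per ADR-007, the detection rule reduces to a substring check on the model slug; a minimal sketch (`uses_chat_endpoint` is an illustrative name):

```python
def uses_chat_endpoint(model: str) -> bool:
    """True if this model generates images via /chat/completions,
    False if it uses the legacy /images/generations endpoint."""
    slug = model.lower()
    return "flux" in slug or "gpt-5-image-mini" in slug
```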

## Video Generation

Video generation uses OpenRouter's `/api/v1/videos` endpoint with a **submit-and-poll** pattern orchestrated by a background worker:

1. The user submits a video request via `POST /generate/video` (or `/generate/video/from-image`).
2. The backend inserts a row into `generated_videos` with `status: "queued"` and returns immediately.
3. The background worker (`video_worker.py`) picks up queued jobs every 15 seconds:
   - Calls `POST /api/v1/videos` with `model`, `prompt`, `aspect_ratio`, `resolution`, `duration`
   - Receives `{"id": "job_id", "polling_url": "https://..."}` and updates the DB to `status: "processing"`
   - Polls `GET polling_url` every 15 seconds until `status` is `"completed"` or `"failed"`
   - Updates the DB with the final status, `video_url`, and any `error` message
4. The frontend polls `GET /generate/video/{db_id}/status` every 5 seconds to show live updates.
5. The completed response includes `video_url`; the video is displayed in a `<video>` element.

Supported models: `openai/sora-2-pro`, `google/veo-3.1-fast`. Both text-to-video and image-to-video use the same `/api/v1/videos` endpoint (image-to-video includes `frame_images` with `first_frame` in the request body).
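The worker side of the steps above can be summarised as a small state machine over the `status` column. This is an illustrative sketch, not the actual `video_worker.py` code:

```python
def next_action(job: dict) -> str:
    """Decide what the background worker should do with a generated_videos row."""
    status = job.get("status")
    if status == "queued":
        return "submit"   # POST /api/v1/videos, then store polling_url
    if status == "processing":
        return "poll"     # GET polling_url; update the row on completed/failed
    return "done"         # completed or failed rows need no further work
```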

@@ -0,0 +1,46 @@

# DuckDB Concurrency and Storage

## Single Writer Per Process

DuckDB allows only one process to open the database file in read-write mode at a time. The FastAPI backend must therefore run with a single worker (`uvicorn --workers 1`); running multiple workers against the same DuckDB file causes startup errors.

## asyncio.Lock for Writes

All database write operations (`INSERT`, `UPDATE`, `DELETE`) in the FastAPI async context are wrapped in a single `asyncio.Lock` (`get_write_lock()` from `backend/app/db.py`). This prevents concurrent coroutines from issuing overlapping writes within the single process, which would otherwise raise DuckDB optimistic concurrency errors.

Read operations (`SELECT`) do not require the lock; DuckDB's MVCC provides consistent read snapshots.
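The pattern can be sketched as follows. `get_write_lock` matches the name cited above; `execute_write` and the connection interface are illustrative, not the repo's actual code:

```python
import asyncio

_write_lock = asyncio.Lock()


def get_write_lock() -> asyncio.Lock:
    """Return the single process-wide lock guarding DuckDB writes."""
    return _write_lock


async def execute_write(conn, sql: str, params: tuple = ()) -> None:
    # Serialise INSERT/UPDATE/DELETE so coroutines never overlap writes.
    async with get_write_lock():
        conn.execute(sql, params)
```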

## Schema

```sql
CREATE TABLE users (
    id UUID DEFAULT uuid() PRIMARY KEY,
    email VARCHAR NOT NULL UNIQUE,
    password_hash VARCHAR NOT NULL,
    role VARCHAR DEFAULT 'user',
    created_at TIMESTAMP DEFAULT now(),
    updated_at TIMESTAMP DEFAULT now()
);

CREATE TABLE refresh_tokens (
    jti UUID DEFAULT uuid() PRIMARY KEY,
    user_id UUID NOT NULL, -- soft FK to users.id
    issued_at TIMESTAMP DEFAULT now(),
    expires_at TIMESTAMP NOT NULL,
    revoked BOOLEAN DEFAULT false
);
```

> The `REFERENCES users(id)` foreign key is intentionally omitted from `refresh_tokens`. DuckDB fires FK checks on `UPDATE` of the parent table (including email changes), causing false constraint violations. Referential integrity is enforced manually: deleting a user also deletes their refresh tokens in the same write transaction.
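The manual rule can be sketched as one transaction. This uses stdlib `sqlite3` in memory purely as a stand-in for DuckDB, and `delete_user` is an illustrative name:

```python
import sqlite3


def delete_user(conn: sqlite3.Connection, user_id: str) -> None:
    """Delete a user and their refresh tokens in a single write transaction."""
    with conn:  # commits both statements together, rolls back on error
        conn.execute("DELETE FROM refresh_tokens WHERE user_id = ?", (user_id,))
        conn.execute("DELETE FROM users WHERE id = ?", (user_id,))
```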

## Access Tokens

Access tokens are **stateless** JWTs — not stored in the database. They are validated by signature and expiry claim only. The short TTL (15 minutes) limits the blast radius if a token is leaked.

## Refresh Tokens

Each refresh token stores a JTI (JWT ID) UUID in the `refresh_tokens` table. On each use the old JTI is revoked and a new one is issued (rotation). On logout the JTI is immediately revoked. Expired and revoked tokens can be purged via `POST /admin/tokens/purge`.
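The rotation rule can be sketched against an in-memory stand-in for the `refresh_tokens` table (`rotate` and the dict layout are illustrative, not the repo's code):

```python
import uuid


def rotate(store: dict, old_jti: str) -> str:
    """Revoke old_jti and return a freshly issued JTI (rotation on use)."""
    if old_jti not in store or store[old_jti]["revoked"]:
        raise ValueError("unknown or revoked refresh token")
    store[old_jti]["revoked"] = True
    new_jti = str(uuid.uuid4())
    store[new_jti] = {"revoked": False}
    return new_jti
```

Reusing an already-rotated token fails, which is exactly what makes token theft detectable.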

## Future: AI Generation History

AI generation metadata (model, prompt, cost, result URLs) can be stored as JSON columns in a future `generation_history` table in DuckDB, enabling per-user analytics and usage dashboards at zero extra infrastructure cost.
@@ -0,0 +1,113 @@

# 9. Architecture Decisions

Important, expensive, large-scale, or risky architecture decisions, including their rationales. By "decisions" we mean selecting one alternative based on given criteria.

Refer to section 4 (Solution Strategy), where the most important decisions are already captured. Avoid redundancy.

> Consider using [ADRs (Architecture Decision Records)](https://cognitect.com/blog/2011/11/15/documenting-architecture-decisions) for every important decision.

## ADR-001: Use DuckDB as the embedded database

**Status:** accepted

**Context:** The application needs persistent storage for user data. A full RDBMS (PostgreSQL, MySQL) would require a separate server process and add operational complexity.

**Decision:** Use DuckDB as an embedded, file-based database accessed in-process by the FastAPI backend.

**Consequences:** No separate DB server needed. Limited to single-writer access patterns. Suitable for the expected load.

---

## ADR-002: Use FastAPI for the backend

**Status:** accepted

**Context:** The backend needs async performance for concurrent AI generation requests and automatic OpenAPI documentation.

**Decision:** Use FastAPI with uvicorn as the ASGI server.

**Consequences:** Async endpoints enable high concurrency. Auto-generated OpenAPI docs simplify frontend integration and testing.

---

## ADR-003: Use Flask for the frontend

**Status:** accepted

**Context:** A lightweight server-side rendering layer is needed for the UI with minimal frontend complexity.

**Decision:** Use Flask with Jinja2 templates to serve HTML pages.

**Consequences:** Simple, familiar framework. No JavaScript build toolchain required. The frontend calls FastAPI over HTTP.

---

## ADR-004: Serialize DuckDB writes with asyncio.Lock

**Status:** accepted

**Context:** FastAPI runs async coroutines concurrently in a single process. DuckDB's optimistic concurrency model raises errors when multiple coroutines issue writes simultaneously to the same connection.

**Decision:** All write operations (`INSERT`, `UPDATE`, `DELETE`) acquire a single process-wide `asyncio.Lock` before executing. The lock is released immediately after the statement completes.

**Consequences:** Writes are serialised within the process, eliminating concurrency errors. Read performance is unaffected. Throughput is acceptable for the expected user load. If write throughput becomes a bottleneck in the future, migrating to PostgreSQL is the preferred path.

---

## ADR-005: Use OpenRouter as the unified AI provider gateway

**Status:** accepted

**Context:** The application needs access to multiple AI model providers (OpenAI, Anthropic, Stability AI, Runway, etc.) for text, image, and video generation.

**Decision:** Route all AI generation requests through the [OpenRouter](https://openrouter.ai) API, which exposes an OpenAI-compatible REST interface for hundreds of models.

**Consequences:** A single API key and base URL cover all model providers. Model switching requires only a change to the `model` field in the request payload. If OpenRouter is unavailable, all generation endpoints return `502 Bad Gateway`. Pricing and rate limits are governed by OpenRouter's per-model policies.

---

## ADR-006: Use submit-and-poll pattern for video generation

**Status:** accepted

**Context:** OpenRouter's video generation models (Sora 2 Pro, Veo 3.1 Fast) do not return video URLs immediately. Video generation is a long-running operation (typically 30-120 seconds) that requires polling.

**Decision:** Use the `/api/v1/videos` endpoint with a two-step pattern: (1) `POST` to submit the job and receive a `polling_url`, (2) `GET` the `polling_url` every 5 seconds until `status` is `"completed"` or `"failed"`. The Flask frontend proxies polling requests via `GET /generate/video/status?polling_url=...` and the frontend JavaScript polls this endpoint automatically.

**Consequences:** The video generation endpoint returns immediately with `status: "queued"` and a `polling_url`. The frontend displays a "Processing..." message and polls for updates. When complete, the video is displayed in a `<video>` element. This adds complexity to the frontend but is necessary for long-running operations. If OpenRouter's polling endpoint is unavailable, the frontend shows an error after a timeout.

---

## ADR-007: Auto-detect image generation model type

**Status:** accepted

**Context:** OpenRouter supports image generation through two different endpoints: the legacy `/images/generations` endpoint (DALL-E 3) and the chat completions endpoint with `modalities: ["image"]` (FLUX.2 Klein 4B, GPT-5 Image Mini). These endpoints have different request/response formats.

**Decision:** The `/generate/image` router auto-detects the model type by checking whether the model slug contains `"flux"` or `"gpt-5-image-mini"`. If so, it routes to `/chat/completions` with `modalities: ["image"]` and `image_config` (aspect_ratio, image_size). Otherwise, it uses `/images/generations` with `size` and `n`.

**Consequences:** Users can specify any image generation model in the form without needing to know which endpoint it uses. The router handles the routing transparently. Adding new image models requires updating the detection logic only if they use a different endpoint.

---

## ADR-008: Flask session-based auth with role caching

**Status:** accepted

**Context:** The Flask frontend needs to know the user's authentication state and role for route protection (`@login_required`, `@admin_required`) without making an extra API call on every request.

**Decision:** Store the JWT access token, refresh token, user email, and user role in the Flask session cookie after login. The `@login_required` decorator checks for `access_token` in the session. The `@admin_required` decorator checks `session["user_role"] == "admin"`. This avoids an extra API call to `/users/me` on every request.

**Consequences:** The user role is cached in the session and may become stale if an admin changes a user's role while the user is logged in. The user must log out and log back in to see the updated role. This is acceptable for the expected usage pattern. The session cookie is signed (Flask's default) to prevent tampering.

---

## ADR-009: Separate generation pages in frontend

**Status:** accepted

**Context:** The original `/generate` page handled text, image, and video generation in a single form, which became unwieldy as more generation types were added.

**Decision:** Create separate Flask routes and Jinja2 templates for each generation type: `/generate/text`, `/generate/image`, `/generate/video`. The `/generate` route redirects to `/generate/text`. The navigation bar includes a "Generate" dropdown with links to each sub-page. The video page uses tabs for text-to-video and image-to-video.

**Consequences:** Each generation type has its own URL, making it bookmarkable and shareable. The navigation is clearer with a dropdown menu. Adding new generation types (e.g., audio) follows the same pattern. The `/generate` redirect provides a sensible default entry point.
@@ -0,0 +1,22 @@

# Architecture Documentation

This file is the entry point for the architecture documentation of **All You Can GET AI Biz**.

The documentation follows the [arc42 template](https://arc42.org/overview) and is split into 12 section files, each covering a specific aspect of the architecture. Read the sections in order for a full picture, or jump directly to the section most relevant to you.

## Structure

| Section | File                                                             | Description                                           |
| ------- | ---------------------------------------------------------------- | ----------------------------------------------------- |
| 1       | [1-introduction-and-goals.md](1-introduction-and-goals.md)       | Requirements, quality goals, stakeholders             |
| 2       | [2-constraints.md](2-constraints.md)                             | Technical, organizational, and convention constraints |
| 3       | [3-context-and-scope.md](3-context-and-scope.md)                 | System boundaries and external interfaces             |
| 4       | [4-solution-strategy.md](4-solution-strategy.md)                 | Fundamental technology and design decisions           |
| 5       | [5-building-block-view.md](5-building-block-view.md)             | Static decomposition of the system                    |
| 6       | [6-runtime-view.md](6-runtime-view.md)                           | Key runtime scenarios and request flows               |
| 7       | [7-deployment-view.md](7-deployment-view.md)                     | Infrastructure and deployment topology                |
| 8       | [8-crosscutting-concepts.md](8-crosscutting-concepts.md)         | Security, logging, error handling, configuration      |
| 9       | [9-architectural-decisions.md](9-architectural-decisions.md)     | Architecture Decision Records (ADRs)                  |
| 10      | [10-quality-requirements.md](10-quality-requirements.md)         | Quality scenarios and acceptance criteria             |
| 11      | [11-risks-and-technical-debt.md](11-risks-and-technical-debt.md) | Known risks and technical debt                        |
| 12      | [12-glossary.md](12-glossary.md)                                 | Domain and technical terms                            |
@@ -0,0 +1,19 @@

# Testing Strategy

The FastAPI backend will be tested using a combination of unit and integration tests.

## Unit Tests

- Use `pytest` with `pytest-asyncio` for async endpoint handlers.
- Mock external dependencies such as the DuckDB connection and the openrouter.ai API.

## Integration Tests

- Use FastAPI's `TestClient` to exercise the app in-process.
- Test full request/response cycles for authentication, user CRUD, and AI generation endpoints.
- Use an in-memory DuckDB instance for database tests.

## Test Coverage

- Aim for >80% coverage on the `app/` package.
- Generate coverage reports with `pytest-cov`.
@@ -0,0 +1,175 @@

# Coolify Deployment Guide

This guide covers deploying `ai.allucanget.biz` using [Coolify](https://coolify.io) with Nixpacks from the repository `https://git.allucanget.biz/allucanget/ai.allucanget.biz.git`.

## Architecture Overview

The application consists of two Python services:

| Service  | Framework         | Port  | Description                                |
| -------- | ----------------- | ----- | ------------------------------------------ |
| Backend  | FastAPI + uvicorn | 12015 | REST API, auth, AI generation, DuckDB      |
| Frontend | Flask + gunicorn  | 12016 | SSR web UI, session auth, proxy to backend |

Coolify's built-in reverse proxy routes traffic:

- `/api/*` → Backend (port 12015)
- `/` → Frontend (port 12016)

## Prerequisites

- A Coolify instance (self-hosted or Cloud)
- Git repository pushed to `https://git.allucanget.biz/allucanget/ai.allucanget.biz.git`
- Domain configured to point to your Coolify server

## Step 1: Create Backend Service

1. In Coolify, click **Add Resource** → **Deploy a new resource** → **Git**
2. Connect your Git repository (`git.allucanget.biz`)
3. Select the `ai.allucanget.biz` repository
4. Choose the `main` branch
5. Set **Build Pack** to `nixpacks`
6. Set **Base Directory** to `/backend`. This tells Nixpacks to look in the `backend/` subdirectory for `requirements.txt` and the Python application
7. Set **Ports Exposed** to `12015`
8. Set **Start Command** to:

   ```txt
   uvicorn app.main:app --host 0.0.0.0 --port 12015
   ```

9. Click **Create Resource**

> **Important:** Nixpacks copies the **contents** of the Base Directory to `/app/` in the container. When Base Directory is `/backend`, the `backend/` folder wrapper is removed — only `app/`, `tests/`, and `requirements.txt` are copied. Therefore the start command uses `app.main:app` (not `backend.app.main:app`).

### Backend Environment Variables

Add these as **Runtime** environment variables in Coolify:

| Variable             | Description                          | Example                              |
| -------------------- | ------------------------------------ | ------------------------------------ |
| `OPENROUTER_API_KEY` | OpenRouter API key for AI generation | `sk-or-v1-...`                       |
| `JWT_SECRET`         | Secret key for JWT token signing     | Generate with `openssl rand -hex 32` |
| `APP_URL`            | Public URL of the backend            | `https://api.ai.allucanget.biz`      |
| `APP_NAME`           | Application name                     | `All You Can GET AI`                 |
| `CORS_ORIGINS`       | Comma-separated allowed origins      | `https://ai.allucanget.biz`          |

## Step 2: Create Frontend Service

1. In Coolify, click **Add Resource** → **Deploy a new resource** → **Git**
2. Select the same repository
3. Choose the `main` branch
4. Set **Build Pack** to `nixpacks`
5. Set **Base Directory** to `/frontend`. This tells Nixpacks to look in the `frontend/` subdirectory for `requirements.txt` and the Python application
6. Set **Ports Exposed** to `12016`
7. Set **Start Command** to:

   ```txt
   gunicorn app.main:app --bind 0.0.0.0:12016 --workers 2 --timeout 120
   ```

8. Click **Create Resource**

> **Note:** Nixpacks will automatically detect and install only the production dependencies from `requirements.txt`.

> **Important:** Nixpacks copies the **contents** of the Base Directory to `/app/` in the container. When Base Directory is `/frontend`, the `frontend/` folder wrapper is removed — only `app/`, `tests/`, and `requirements.txt` are copied. Therefore the start command uses `app.main:app` (not `frontend.app.main:app`).

### Frontend Environment Variables

Add these as **Runtime** environment variables in Coolify:

| Variable           | Description                               | Example                                                         |
| ------------------ | ----------------------------------------- | --------------------------------------------------------------- |
| `FLASK_SECRET_KEY` | Flask session cookie signing key          | Generate with `openssl rand -hex 32`                            |
| `BACKEND_URL`      | Internal URL to reach the backend service | `http://localhost:12015` (or use Coolify's internal networking) |

## Step 3: Configure Reverse Proxy

Coolify provides a built-in reverse proxy. Configure routing rules:

### Backend Proxy Rules

- **Domain**: `api.ai.allucanget.biz` (or a subdomain of your choice)
- **Port**: `12015`
- **Path**: `/api/*` → forward to backend

### Frontend Proxy Rules

- **Domain**: `ai.allucanget.biz`
- **Port**: `12016`
- **Path**: `/` → forward to frontend

## Step 4: SSL/TLS

Enable HTTPS in Coolify for both services:

1. Go to each service's settings
2. Enable **Auto HTTPS** (Let's Encrypt)
3. Configure domain names
4. Coolify automatically handles certificate renewal

## Step 5: Persistent Storage (Optional)

If you want to persist DuckDB data:

1. In Coolify, go to the **Backend** service
2. Navigate to **Persistent Storage**
3. Add a volume mount:
   - **Host Path**: `/data` (or any path on the host)
   - **Container Path**: `/app/data`
   - **Type**: `Bind Mount` or `Volume`

## Troubleshooting

### Backend healthcheck stays unhealthy

- Check backend logs in Coolify
- Verify `OPENROUTER_API_KEY` and `JWT_SECRET` are set
- Verify the volume mount at `/app/data` is writable

### Backend won't start

- Check that `OPENROUTER_API_KEY` is set
- Verify `JWT_SECRET` is a sufficiently long random string
- Check logs in Coolify's **Logs** tab

### Frontend can't reach backend

- Ensure `BACKEND_URL` points to the correct internal URL
- If both services are on the same Coolify server, use `http://localhost:12015`
- Check that the backend service is running and healthy

### CORS errors

- Set `CORS_ORIGINS` to include your frontend domain
- Example: `https://ai.allucanget.biz`
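Since `CORS_ORIGINS` is comma-separated, the backend presumably splits it before handing the list to its CORS middleware; a sketch (`parse_origins` is a hypothetical helper, not from the repo):

```python
def parse_origins(raw: str) -> list[str]:
    """Split a comma-separated CORS_ORIGINS value into clean origin strings."""
    return [origin.strip() for origin in raw.split(",") if origin.strip()]
```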

### Nixpacks build fails

- Verify the base directory is correct (`/backend` or `/frontend`)
- Check that `requirements.txt` exists in the base directory
- Review build logs in Coolify

## Environment Variable Summary

All required environment variables:

| Variable             | Service  | Required                              |
| -------------------- | -------- | ------------------------------------- |
| `OPENROUTER_API_KEY` | Backend  | Yes                                   |
| `JWT_SECRET`         | Backend  | Yes                                   |
| `APP_URL`            | Backend  | Yes                                   |
| `APP_NAME`           | Backend  | No (defaults to "All You Can GET AI") |
| `CORS_ORIGINS`       | Backend  | Yes                                   |
| `FLASK_SECRET_KEY`   | Frontend | Yes                                   |
| `BACKEND_URL`        | Frontend | Yes                                   |

## Deployment Checklist

- [ ] Repository pushed to Git
- [ ] Backend service created with correct base directory (`/backend`)
- [ ] Backend environment variables configured
- [ ] Frontend service created with correct base directory (`/frontend`)
- [ ] Frontend environment variables configured
- [ ] SSL certificates enabled
- [ ] Domain names configured
- [ ] Health checks passing
- [ ] Logs reviewed for errors
@@ -0,0 +1,21 @@

FROM python:3.12-slim

WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    gcc \
    && rm -rf /var/lib/apt/lists/*

# Copy requirements and install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY . .

# Expose port
EXPOSE 12016

# Run the application
CMD ["gunicorn", "app.main:app", "--bind", "0.0.0.0:12016", "--workers", "2", "--timeout", "120"]
@@ -0,0 +1,10 @@

"""Flask frontend configuration."""
import os


class Config:
    SECRET_KEY = os.getenv(
        "FLASK_SECRET_KEY", "dev-secret-change-in-production")
    BACKEND_URL = os.getenv("BACKEND_URL", "http://localhost:12015")
    SESSION_COOKIE_HTTPONLY = True
    SESSION_COOKIE_SAMESITE = "Lax"
@@ -0,0 +1,587 @@

"""Flask frontend application."""
import functools
from datetime import datetime, timezone

import httpx
from flask import (
    Flask,
    Response,
    flash,
    jsonify,
    redirect,
    render_template,
    request,
    session,
    url_for,
)

from .config import Config

app = Flask(__name__)
app.config.from_object(Config)


# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------

@app.template_filter("fromisoformat")
def from_iso_format(s: str) -> datetime:
    """Convert ISO 8601 string to datetime object."""
    return datetime.fromisoformat(s)


@app.template_filter("humantime")
def human_time(dt: datetime) -> str:
    """Format a datetime object into a human-readable relative time."""
    now = datetime.now(timezone.utc)
    # Ensure dt is aware for comparison
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)

    diff = now - dt
    seconds = diff.total_seconds()

    if seconds < 60:
        return "just now"
    elif seconds < 3600:
        minutes = int(seconds / 60)
        return f"{minutes} minute{'s' if minutes > 1 else ''} ago"
    elif seconds < 86400:
        hours = int(seconds / 3600)
        return f"{hours} hour{'s' if hours > 1 else ''} ago"
    elif seconds < 2592000:
        days = int(seconds / 86400)
        return f"{days} day{'s' if days > 1 else ''} ago"
    elif seconds < 31536000:
        months = int(seconds / 2592000)
        return f"{months} month{'s' if months > 1 else ''} ago"
    else:
        years = int(seconds / 31536000)
        return f"{years} year{'s' if years > 1 else ''} ago"


def _backend(path: str) -> str:
    return f"{app.config['BACKEND_URL']}{path}"


def _api(method: str, path: str, *, token: str | None = None, **kwargs):
    headers = kwargs.pop("headers", {})
    if token:
        headers["Authorization"] = f"Bearer {token}"
    return httpx.request(method, _backend(path), headers=headers, timeout=30, **kwargs)


def _model_matches_modality(model: dict, modality: str) -> bool:
    """Heuristic fallback when backend modality filter returns empty."""
    model_modality = (model.get("modality") or "").lower()
    if model_modality == modality:
        return True

    text = f"{model.get('id', '')} {model.get('name', '')}".lower()
    keywords = {
        "image": ["image", "dall-e", "flux", "stable-diffusion", "sdxl", "recraft", "ideogram", "gpt-image"],
        "video": ["video", "sora", "runway", "veo", "kling", "pika", "luma", "wan"],
        "audio": ["audio", "speech", "voice", "tts", "transcribe", "whisper"],
    }

    if modality in keywords:
        return any(k in text for k in keywords[modality])

    if modality == "text":
        non_text_hits = any(
            k in text for k in keywords["image"] + keywords["video"] + keywords["audio"])
        return not non_text_hits

    return False


def _load_models(token: str, modality: str) -> list[dict]:
    """Load models for modality; fallback to unfiltered cache if needed."""
    try:
        models_resp = _api("GET", "/models/", token=token,
                           params={"modality": modality})
    except httpx.RequestError:
        return []
    if models_resp.status_code == 200:
        try:
            models = models_resp.json()
        except ValueError:
            models = []
        if models:
            return models

    try:
        all_resp = _api("GET", "/models/", token=token)
    except httpx.RequestError:
        return []
    if all_resp.status_code != 200:
        return []

    try:
        all_models = all_resp.json()
    except ValueError:
        return []
    filtered = [m for m in all_models if _model_matches_modality(m, modality)]
    return filtered or all_models


def login_required(view):
    @functools.wraps(view)
    def wrapped(*args, **kwargs):
        if "access_token" not in session:
            return redirect(url_for("login"))
        return view(*args, **kwargs)
    return wrapped


def admin_required(view):
    @functools.wraps(view)
    def wrapped(*args, **kwargs):
        if "access_token" not in session:
            return redirect(url_for("login"))
        if session.get("user_role") != "admin":
            flash("Admin access required.", "error")
            return redirect(url_for("dashboard"))
        return view(*args, **kwargs)
    return wrapped


# ---------------------------------------------------------------------------
# Auth routes
# ---------------------------------------------------------------------------

@app.get("/")
def index():
    if "access_token" in session:
        return redirect(url_for("dashboard"))
    return redirect(url_for("login"))


@app.route("/login", methods=["GET", "POST"])
def login():
    if request.method == "POST":
        email = request.form["email"]
        password = request.form["password"]
        resp = _api("POST", "/auth/login",
                    json={"email": email, "password": password})
        if resp.status_code == 200:
            data = resp.json()
            session["access_token"] = data["access_token"]
            session["refresh_token"] = data["refresh_token"]
            me = _api("GET", "/users/me", token=data["access_token"])
            if me.status_code == 200:
                u = me.json()
                session["user_email"] = u.get("email", "")
                session["user_role"] = u.get("role", "user")
            return redirect(url_for("dashboard"))
        flash("Invalid email or password.", "error")
    return render_template("login.html")


@app.route("/register", methods=["GET", "POST"])
def register():
    if request.method == "POST":
        email = request.form["email"]
        password = request.form["password"]
        resp = _api("POST", "/auth/register",
                    json={"email": email, "password": password})
        if resp.status_code == 201:
            flash("Account created. Please log in.", "success")
            return redirect(url_for("login"))
        detail = resp.json().get("detail", "Registration failed.")
        flash(detail, "error")
    return render_template("register.html")


@app.get("/logout")
def logout():
    refresh_token = session.get("refresh_token")
    if refresh_token:
        _api("POST", "/auth/logout", json={"refresh_token": refresh_token})
    session.clear()
    return redirect(url_for("login"))


# ---------------------------------------------------------------------------
# Authenticated routes
# ---------------------------------------------------------------------------

@app.get("/dashboard")
@login_required
def dashboard():
    token = session["access_token"]
    resp = _api("GET", "/users/me", token=token)
    user = resp.json() if resp.status_code == 200 else {}
    img_resp = _api("GET", "/images/", token=token)
    images = img_resp.json() if img_resp.status_code == 200 else []
    gen_resp = _api("GET", "/generate/images", token=token)
    generated_images = gen_resp.json() if gen_resp.status_code == 200 else []

    vid_resp = _api("GET", "/generate/videos", token=token)
    videos = vid_resp.json() if vid_resp.status_code == 200 else []
    pending_videos = [v for v in videos if v.get(
        "status") not in ("completed", "failed")]
    completed_videos = [v for v in videos if v.get("status") == "completed"]

    return render_template("dashboard.html", user=user, images=images,
                           generated_images=generated_images,
                           pending_videos=pending_videos,
                           completed_videos=completed_videos)


@app.get("/gallery")
@login_required
def gallery():
    token = session["access_token"]

    # Fetch all content types
    uploads_resp = _api("GET", "/images/", token=token)
|
||||
uploads = uploads_resp.json() if uploads_resp.status_code == 200 else []
|
||||
|
||||
gen_images_resp = _api("GET", "/generate/images", token=token)
|
||||
generated_images = gen_images_resp.json(
|
||||
) if gen_images_resp.status_code == 200 else []
|
||||
|
||||
videos_resp = _api("GET", "/generate/videos", token=token)
|
||||
videos = videos_resp.json() if videos_resp.status_code == 200 else []
|
||||
|
||||
# Separate pending videos
|
||||
pending_videos = [v for v in videos if v.get(
|
||||
"status") not in ("completed", "failed")]
|
||||
completed_videos = [v for v in videos if v.get("status") == "completed"]
|
||||
|
||||
return render_template(
|
||||
"gallery.html",
|
||||
uploads=uploads,
|
||||
generated_images=generated_images,
|
||||
pending_videos=pending_videos,
|
||||
completed_videos=completed_videos,
|
||||
)
|
||||
|
||||
|
||||
@app.get("/gallery/image/<image_id>")
|
||||
@login_required
|
||||
def image_detail(image_id: str):
|
||||
token = session["access_token"]
|
||||
resp = _api("GET", f"/generate/images/{image_id}", token=token)
|
||||
image = resp.json() if resp.status_code == 200 else None
|
||||
return render_template("image_detail.html", image=image)
|
||||
|
||||
|
||||
@app.get("/gallery/video/<video_id>")
|
||||
@login_required
|
||||
def video_detail(video_id: str):
|
||||
token = session["access_token"]
|
||||
resp = _api("GET", f"/generate/videos/{video_id}", token=token)
|
||||
video = resp.json() if resp.status_code == 200 else None
|
||||
return render_template("video_detail.html", video=video)
|
||||
|
||||
|
||||
@app.get("/gallery/upload/<image_id>")
|
||||
@login_required
|
||||
def upload_detail(image_id: str):
|
||||
token = session["access_token"]
|
||||
resp = _api("GET", f"/images/{image_id}", token=token)
|
||||
image = resp.json() if resp.status_code == 200 else None
|
||||
return render_template("upload_detail.html", image=image)
|
||||
|
||||
|
||||
# ── Generate ──────────────────────────────────────────────────────────────
|
||||
|
||||
@app.get("/images/<image_id>/file")
|
||||
@login_required
|
||||
def serve_uploaded_image(image_id: str):
|
||||
resp = _api("GET", f"/images/{image_id}/file",
|
||||
token=session["access_token"])
|
||||
if resp.status_code != 200:
|
||||
return Response("Not found", status=404)
|
||||
return Response(
|
||||
resp.content,
|
||||
status=200,
|
||||
content_type=resp.headers.get("content-type", "image/jpeg"),
|
||||
)
|
||||
|
||||
|
||||
@app.get("/generate")
|
||||
@login_required
|
||||
def generate():
|
||||
return redirect(url_for("generate_text"))
|
||||
|
||||
|
||||
@app.route("/generate/text", methods=["GET", "POST"])
|
||||
@login_required
|
||||
def generate_text():
|
||||
error = None
|
||||
token = session["access_token"]
|
||||
chat_history: list[dict] = session.get("chat_history", [])
|
||||
system_prompt: str = session.get("chat_system_prompt", "")
|
||||
model: str = session.get("chat_model", "")
|
||||
|
||||
if request.method == "POST":
|
||||
action = request.form.get("action", "send")
|
||||
|
||||
if action == "clear":
|
||||
session.pop("chat_history", None)
|
||||
session.pop("chat_system_prompt", None)
|
||||
session.pop("chat_model", None)
|
||||
return redirect(url_for("generate_text"))
|
||||
|
||||
prompt = request.form.get("prompt", "").strip()
|
||||
model = request.form.get("model", "").strip()
|
||||
system_prompt = request.form.get("system_prompt", "").strip()
|
||||
|
||||
# Persist model + system_prompt across turns
|
||||
session["chat_model"] = model
|
||||
session["chat_system_prompt"] = system_prompt
|
||||
|
||||
if prompt:
|
||||
# Build messages: history (user/assistant only) + new user msg
|
||||
messages = [m for m in chat_history if m["role"]
|
||||
in ("user", "assistant")]
|
||||
messages.append({"role": "user", "content": prompt})
|
||||
|
||||
payload: dict = {
|
||||
"model": model,
|
||||
"messages": [{"role": m["role"], "content": m["content"]} for m in messages],
|
||||
}
|
||||
if system_prompt:
|
||||
payload["system_prompt"] = system_prompt
|
||||
|
||||
resp = _api("POST", "/generate/text", token=token, json=payload)
|
||||
if resp.status_code == 200:
|
||||
data = resp.json()
|
||||
chat_history = list(messages)
|
||||
chat_history.append({"role": "assistant", "content": data["content"],
|
||||
"usage": data.get("usage")})
|
||||
session["chat_history"] = chat_history
|
||||
else:
|
||||
try:
|
||||
error = resp.json().get("detail", "Generation failed.")
|
||||
except Exception:
|
||||
error = "Generation failed."
|
||||
|
||||
models = _load_models(token, "text")
|
||||
return render_template(
|
||||
"generate_text.html",
|
||||
chat_history=session.get("chat_history", []),
|
||||
error=error,
|
||||
models=models,
|
||||
system_prompt=system_prompt,
|
||||
current_model=model,
|
||||
)
|
||||
|
||||
|
||||
@app.route("/generate/image", methods=["GET", "POST"])
|
||||
@login_required
|
||||
def generate_image():
|
||||
result = error = None
|
||||
token = session["access_token"]
|
||||
if request.method == "POST":
|
||||
# Upload reference image if provided
|
||||
ref_file = request.files.get("reference_image")
|
||||
if ref_file and ref_file.filename:
|
||||
up_resp = _api(
|
||||
"POST", "/images/upload",
|
||||
token=token,
|
||||
files={"file": (ref_file.filename,
|
||||
ref_file.stream, ref_file.content_type)},
|
||||
)
|
||||
if up_resp.status_code not in (200, 201):
|
||||
error = up_resp.json().get("detail", "Image upload failed.")
|
||||
models = _load_models(token, "image")
|
||||
return render_template("generate_image.html", result=result, error=error, models=models)
|
||||
|
||||
resp = _api("POST", "/generate/image", token=token, json={
|
||||
"model": request.form.get("model", "").strip(),
|
||||
"prompt": request.form.get("prompt", "").strip(),
|
||||
"n": int(request.form.get("n", 1)),
|
||||
"size": request.form.get("size", "1024x1024"),
|
||||
"aspect_ratio": request.form.get("aspect_ratio", "").strip() or None,
|
||||
"image_size": request.form.get("image_size", "").strip() or None,
|
||||
})
|
||||
if resp.status_code == 200:
|
||||
result = resp.json()
|
||||
else:
|
||||
error = resp.json().get("detail", "Generation failed.")
|
||||
models = _load_models(token, "image")
|
||||
return render_template("generate_image.html", result=result, error=error, models=models)
|
||||
|
||||
|
||||
@app.route("/generate/video", methods=["GET", "POST"])
|
||||
@login_required
|
||||
def generate_video():
|
||||
error = None
|
||||
token = session["access_token"]
|
||||
if request.method == "POST":
|
||||
mode = request.form.get("mode", "text")
|
||||
duration_raw = request.form.get("duration_seconds", "")
|
||||
duration = int(
|
||||
duration_raw) if duration_raw.strip().isdigit() else None
|
||||
resolution = request.form.get("resolution", "").strip() or None
|
||||
|
||||
if mode == "image":
|
||||
resp = _api("POST", "/generate/video/from-image", token=token, json={
|
||||
"model": request.form.get("model", "").strip(),
|
||||
"image_url": request.form.get("image_url", "").strip(),
|
||||
"prompt": request.form.get("prompt", "").strip(),
|
||||
"aspect_ratio": request.form.get("aspect_ratio", "16:9"),
|
||||
"duration_seconds": duration,
|
||||
"resolution": resolution,
|
||||
})
|
||||
else:
|
||||
resp = _api("POST", "/generate/video", token=token, json={
|
||||
"model": request.form.get("model", "").strip(),
|
||||
"prompt": request.form.get("prompt", "").strip(),
|
||||
"aspect_ratio": request.form.get("aspect_ratio", "16:9"),
|
||||
"duration_seconds": duration,
|
||||
"resolution": resolution,
|
||||
})
|
||||
|
||||
if resp.status_code == 200:
|
||||
result = resp.json()
|
||||
# On success, redirect to the detail page to monitor progress
|
||||
db_id = result.get("db_id")
|
||||
if db_id:
|
||||
return redirect(url_for("video_detail", video_id=db_id))
|
||||
# Fallback for older backend versions
|
||||
flash("Video job started.", "success")
|
||||
return redirect(url_for("gallery"))
|
||||
else:
|
||||
error = resp.json().get("detail", "Generation failed.")
|
||||
|
||||
models = _load_models(token, "video")
|
||||
return render_template("generate_video.html", error=error, models=models)
|
||||
|
||||
|
||||
@app.get("/generate/video/status")
|
||||
@login_required
|
||||
def generate_video_status():
|
||||
"""Proxy video status polling to the backend."""
|
||||
polling_url = request.args.get("polling_url", "")
|
||||
if not polling_url:
|
||||
return jsonify({"error": "polling_url required"}), 400
|
||||
resp = _api(
|
||||
"GET", "/generate/video/status",
|
||||
token=session["access_token"],
|
||||
params={"polling_url": polling_url},
|
||||
)
|
||||
return jsonify(resp.json()), resp.status_code
|
||||
|
||||
|
||||
@app.get("/generate/video/<video_id>/status")
|
||||
@login_required
|
||||
def generate_video_db_status(video_id: str):
|
||||
"""Return current DB status for a video job (polled by frontend JS)."""
|
||||
resp = _api(
|
||||
"GET", f"/generate/videos/{video_id}", token=session["access_token"])
|
||||
return jsonify(resp.json()), resp.status_code
|
||||
|
||||
|
||||
@app.post("/generate/video/<video_id>/cancel")
|
||||
@login_required
|
||||
def cancel_video_job(video_id: str):
|
||||
"""Proxy cancel request to backend."""
|
||||
resp = _api(
|
||||
"POST", f"/generate/videos/{video_id}/cancel", token=session["access_token"])
|
||||
return jsonify(resp.json()), resp.status_code
|
||||
|
||||
|
||||
# ── Admin ─────────────────────────────────────────────────────────────────
|
||||
|
||||
@app.get("/admin")
|
||||
@admin_required
|
||||
def admin():
|
||||
token = session["access_token"]
|
||||
stats_resp = _api("GET", "/admin/stats", token=token)
|
||||
users_resp = _api("GET", "/users", token=token)
|
||||
stats = stats_resp.json() if stats_resp.status_code == 200 else {}
|
||||
users = users_resp.json() if users_resp.status_code == 200 else []
|
||||
return render_template("admin.html", stats=stats, users=users)
|
||||
|
||||
|
||||
@app.post("/admin/users/<user_id>/role")
|
||||
@admin_required
|
||||
def admin_set_role(user_id: str):
|
||||
role = request.form.get("role", "user")
|
||||
_api("PUT", f"/users/{user_id}/role",
|
||||
token=session["access_token"], json={"role": role})
|
||||
flash(f"Role updated to '{role}'.", "success")
|
||||
return redirect(url_for("admin"))
|
||||
|
||||
|
||||
@app.post("/admin/users/<user_id>/delete")
|
||||
@admin_required
|
||||
def admin_delete_user(user_id: str):
|
||||
_api("DELETE", f"/users/{user_id}", token=session["access_token"])
|
||||
flash("User deleted.", "success")
|
||||
return redirect(url_for("admin"))
|
||||
|
||||
|
||||
@app.get("/admin/models")
|
||||
@admin_required
|
||||
def admin_models():
|
||||
"""Show model cache status and list all models."""
|
||||
return render_template("admin/models.html")
|
||||
|
||||
|
||||
# ── Admin API proxies (same-origin for browser JS, avoids mixed-content) ──
|
||||
|
||||
@app.get("/api/admin/videos")
|
||||
@admin_required
|
||||
def api_admin_list_videos():
|
||||
resp = _api("GET", "/admin/videos", token=session["access_token"])
|
||||
return jsonify(resp.json()), resp.status_code
|
||||
|
||||
|
||||
@app.post("/api/admin/videos/<job_id>/retry")
|
||||
@admin_required
|
||||
def api_admin_retry_video(job_id: str):
|
||||
resp = _api(
|
||||
"POST", f"/admin/videos/{job_id}/retry", token=session["access_token"])
|
||||
return jsonify(resp.json()), resp.status_code
|
||||
|
||||
|
||||
@app.post("/api/admin/videos/<job_id>/cancel")
|
||||
@admin_required
|
||||
def api_admin_cancel_video(job_id: str):
|
||||
resp = _api(
|
||||
"POST", f"/admin/videos/{job_id}/cancel", token=session["access_token"])
|
||||
return jsonify(resp.json()), resp.status_code
|
||||
|
||||
|
||||
@app.delete("/api/admin/videos/<job_id>")
|
||||
@admin_required
|
||||
def api_admin_delete_video(job_id: str):
|
||||
resp = _api(
|
||||
"DELETE", f"/admin/videos/{job_id}", token=session["access_token"])
|
||||
return jsonify(resp.json()), resp.status_code
|
||||
|
||||
|
||||
# ── Profile ───────────────────────────────────────────────────────────────
|
||||
|
||||
@app.route("/users/profile", methods=["GET", "POST"])
|
||||
@login_required
|
||||
def profile():
|
||||
token = session["access_token"]
|
||||
if request.method == "POST":
|
||||
payload: dict = {}
|
||||
new_email = request.form.get("email", "").strip()
|
||||
new_password = request.form.get("password", "").strip()
|
||||
if new_email:
|
||||
payload["email"] = new_email
|
||||
if new_password:
|
||||
payload["password"] = new_password
|
||||
if payload:
|
||||
resp = _api("PUT", "/users/me", token=token, json=payload)
|
||||
if resp.status_code == 200:
|
||||
updated = resp.json()
|
||||
session["user_email"] = updated.get(
|
||||
"email", session.get("user_email", ""))
|
||||
flash("Profile updated.", "success")
|
||||
else:
|
||||
flash(resp.json().get("detail", "Update failed."), "error")
|
||||
return redirect(url_for("profile"))
|
||||
resp = _api("GET", "/users/me", token=token)
|
||||
user = resp.json() if resp.status_code == 200 else {}
|
||||
return render_template("profile.html", user=user)
|
||||
@@ -0,0 +1,174 @@
document.addEventListener("DOMContentLoaded", () => {
  // ── Loading overlay ────────────────────────────────────
  const overlay = document.getElementById("loading-overlay");

  document.querySelectorAll("form").forEach((form) => {
    form.addEventListener("submit", () => {
      if (overlay) overlay.classList.add("active");
    });
  });

  // ── Hamburger menu ─────────────────────────────────────
  const hamburger = document.querySelector(".hamburger");
  const navLinks = document.querySelector(".nav-links");

  if (hamburger && navLinks) {
    hamburger.addEventListener("click", () => {
      navLinks.classList.toggle("open");
    });
  }

  // ── Image upload preview ───────────────────────────────
  const imageInput = document.getElementById("reference_image");
  const imagePreviewWrap = document.getElementById("image-upload-preview");
  const imagePreview = document.getElementById("image-upload-preview-img");
  const imageFilename = document.getElementById("image-upload-filename");

  if (imageInput && imagePreviewWrap && imagePreview && imageFilename) {
    imageInput.addEventListener("change", () => {
      const file = imageInput.files && imageInput.files[0];
      if (!file) {
        imagePreviewWrap.hidden = true;
        imagePreview.removeAttribute("src");
        imageFilename.textContent = "";
        return;
      }

      imagePreview.src = URL.createObjectURL(file);
      imageFilename.textContent = file.name;
      imagePreviewWrap.hidden = false;
    });
  }

  // ── Generate dropdown tabs ─────────────────────────────
  document.querySelectorAll(".tab-btn").forEach((btn) => {
    btn.addEventListener("click", () => {
      const target = btn.dataset.tab;
      const container = btn.closest(".tabs-container");
      if (!container) return;

      container
        .querySelectorAll(".tab-btn")
        .forEach((b) => b.classList.remove("active"));
      container
        .querySelectorAll(".tab-panel")
        .forEach((p) => p.classList.remove("active"));

      btn.classList.add("active");
      const panel = container.querySelector(`#tab-${target}`);
      if (panel) panel.classList.add("active");
    });
  });

  // ── Video status polling ───────────────────────────────
  const pollDiv = document.getElementById("video-poll-status");
  if (pollDiv) {
    const videoId = pollDiv.dataset.videoId;
    const statusText = document.getElementById("poll-status-text");
    const videoContainer = document.getElementById("poll-video-container");
    const cancelBtn = document.getElementById("cancel-video-btn");
    const cancelMsg = document.getElementById("cancel-msg");
    const MAX_POLLS = 120; // ~10 minutes at 5s interval
    let pollCount = 0;
    let interval = null;

    const stopPolling = () => {
      if (interval) {
        clearInterval(interval);
        interval = null;
      }
    };

    if (cancelBtn) {
      cancelBtn.addEventListener("click", async () => {
        cancelBtn.disabled = true;
        cancelBtn.textContent = "Cancelling…";
        try {
          const resp = await fetch(
            "/generate/video/" + encodeURIComponent(videoId) + "/cancel",
            { method: "POST" },
          );
          if (resp.ok) {
            stopPolling();
            cancelBtn.classList.add("hidden");
            if (cancelMsg) {
              cancelMsg.textContent = "Job cancelled.";
              cancelMsg.classList.remove("hidden", "text-red-500");
              cancelMsg.classList.add("text-gray-300");
            }
            if (statusText) {
              statusText.innerHTML = "Status: <strong>cancelled</strong>";
            }
          } else {
            const data = await resp.json().catch(() => ({}));
            cancelBtn.disabled = false;
            cancelBtn.textContent = "Cancel Job";
            if (cancelMsg) {
              cancelMsg.textContent = data.detail || "Cancel failed.";
              cancelMsg.classList.remove("hidden");
              cancelMsg.classList.add("text-red-500");
            }
          }
        } catch (e) {
          cancelBtn.disabled = false;
          cancelBtn.textContent = "Cancel Job";
          if (cancelMsg) {
            cancelMsg.textContent = "Network error.";
            cancelMsg.classList.remove("hidden");
            cancelMsg.classList.add("text-red-500");
          }
        }
      });
    }

    interval = setInterval(async () => {
      try {
        pollCount++;
        if (pollCount > MAX_POLLS) {
          stopPolling();
          pollDiv.innerHTML =
            '<div class="alert alert-warning">Polling timed out. Please refresh the page to check status.</div>';
          return;
        }
        const resp = await fetch(
          "/generate/video/" + encodeURIComponent(videoId) + "/status",
        );
        if (!resp.ok) return;
        const data = await resp.json();

        if (statusText) {
          statusText.innerHTML = "Status: <strong>" + data.status + "</strong>";
        }

        if (data.status === "completed") {
          stopPolling();
          if (data.video_url) {
            if (videoContainer) {
              const vid = document.createElement("video");
              vid.src = data.video_url;
              vid.controls = true;
              vid.className = "generated-video";
              videoContainer.appendChild(vid);
              const msg = pollDiv.querySelector("p");
              if (msg) msg.textContent = "Video ready!";
            } else {
              // video_detail page: reload to show the video element
              window.location.reload();
            }
          }
        } else if (data.status === "failed") {
          stopPolling();
          pollDiv.innerHTML =
            '<div class="alert alert-error">Generation failed.</div>';
        } else if (data.status === "cancelled") {
          stopPolling();
          if (cancelBtn) cancelBtn.classList.add("hidden");
          pollDiv.innerHTML =
            '<div class="alert alert-info">Job was cancelled.</div>';
        }
      } catch (e) {
        console.error("Video polling error:", e);
      }
    }, 5000);
  }
});
@@ -0,0 +1,821 @@
@import url("https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600;700&display=swap");

:root {
  --bg: #0f1117;
  --surface: #1a1d27;
  --surface-2: #22263a;
  --border: #2e3250;
  --text: #e8eaf6;
  --text-muted: #8b90b8;
  --accent: #7c6ff7;
  --accent-hover: #9d97ff;
  --danger: #e05a6a;
  --danger-hover: #f07080;
  --success: #56c489;
  --warning: #f0b429;
  --radius: 8px;
}

*,
*::before,
*::after {
  box-sizing: border-box;
  margin: 0;
  padding: 0;
}

body {
  font-family: "Inter", system-ui, -apple-system, sans-serif;
  background: var(--bg);
  color: var(--text);
  min-height: 100vh;
  line-height: 1.6;
}

/* ─── Nav ──────────────────────────────────────────────── */
header {
  background: var(--surface);
  border-bottom: 1px solid var(--border);
  padding: 0 1.5rem;
  position: sticky;
  top: 0;
  z-index: 100;
}

nav {
  display: flex;
  align-items: center;
  justify-content: space-between;
  height: 3.5rem;
  max-width: 1100px;
  margin: 0 auto;
}

.brand {
  color: var(--accent-hover);
  font-weight: 700;
  text-decoration: none;
  font-size: 1.1rem;
  letter-spacing: -0.02em;
}

.nav-links {
  display: flex;
  align-items: center;
  gap: 0.25rem;
}

.nav-links a {
  color: var(--text-muted);
  text-decoration: none;
  padding: 0.4rem 0.75rem;
  border-radius: var(--radius);
  font-size: 0.875rem;
  font-weight: 500;
  transition: color 0.15s, background 0.15s;
}

.nav-links a:hover {
  color: var(--text);
  background: var(--surface-2);
}

/* Dropdown */
.nav-dropdown {
  position: relative;
}

.nav-dropdown-menu {
  display: none;
  position: absolute;
  top: calc(100% + 0.5rem);
  left: 0;
  min-width: 160px;
  background: var(--surface-2);
  border: 1px solid var(--border);
  border-radius: var(--radius);
  overflow: hidden;
  z-index: 200;
}

.nav-dropdown:hover .nav-dropdown-menu,
.nav-dropdown.open .nav-dropdown-menu {
  display: block;
}

.nav-dropdown-menu a {
  display: block;
  padding: 0.55rem 1rem;
  border-radius: 0;
}

/* Hamburger */
.hamburger {
  display: none;
  flex-direction: column;
  gap: 5px;
  cursor: pointer;
  padding: 0.5rem;
  background: none;
  border: none;
}

.hamburger span {
  display: block;
  width: 22px;
  height: 2px;
  background: var(--text-muted);
  border-radius: 2px;
  transition: transform 0.2s, opacity 0.2s;
}

/* ─── Main layout ──────────────────────────────────────── */
main {
  max-width: 1200px;
  margin: 2rem auto;
  padding: 0 1rem;
}

main:has(.admin-page) {
  max-width: 1200px;
}

/* ─── Alerts ───────────────────────────────────────────── */
.alert {
  padding: 0.75rem 1rem;
  border-radius: var(--radius);
  margin-bottom: 1rem;
  font-size: 0.875rem;
  font-weight: 500;
}

.alert-success {
  background: rgba(86, 196, 137, 0.15);
  color: var(--success);
  border: 1px solid rgba(86, 196, 137, 0.3);
}

.alert-error {
  background: rgba(224, 90, 106, 0.15);
  color: var(--danger);
  border: 1px solid rgba(224, 90, 106, 0.3);
}

/* ─── Card ─────────────────────────────────────────────── */
.card {
  background: var(--surface);
  border: 1px solid var(--border);
  border-radius: 12px;
  padding: 2rem;
}

.card h1 {
  font-size: 1.4rem;
  font-weight: 700;
  margin-bottom: 1.5rem;
  letter-spacing: -0.02em;
}

.card h2 {
  font-size: 1.1rem;
  font-weight: 600;
  margin-bottom: 1rem;
}

/* ─── Forms ────────────────────────────────────────────── */
form label {
  display: block;
  font-size: 0.8rem;
  font-weight: 600;
  color: var(--text-muted);
  text-transform: uppercase;
  letter-spacing: 0.04em;
  margin-bottom: 0.35rem;
  margin-top: 1.1rem;
}

form input,
form select,
form textarea {
  width: 100%;
  padding: 0.6rem 0.85rem;
  background: var(--surface-2);
  border: 1px solid var(--border);
  border-radius: var(--radius);
  font-size: 0.95rem;
  font-family: inherit;
  color: var(--text);
  transition: border-color 0.15s, box-shadow 0.15s;
}

form input::placeholder,
form textarea::placeholder {
  color: var(--text-muted);
}

form select option {
  background: var(--surface-2);
}

form input:focus,
form select:focus,
form textarea:focus {
  outline: none;
  border-color: var(--accent);
  box-shadow: 0 0 0 3px rgba(124, 111, 247, 0.25);
}

/* ─── Buttons ──────────────────────────────────────────── */
.btn,
button[type="submit"] {
  display: inline-flex;
  align-items: center;
  gap: 0.4rem;
  margin-top: 1.25rem;
  padding: 0.6rem 1.4rem;
  background: var(--accent);
  color: #fff;
  border: none;
  border-radius: var(--radius);
  font-size: 0.9rem;
  font-weight: 600;
  font-family: inherit;
  cursor: pointer;
  text-decoration: none;
  transition: background 0.15s, transform 0.1s;
}

.btn:hover,
button[type="submit"]:hover {
  background: var(--accent-hover);
}

.btn:active,
button[type="submit"]:active {
  transform: scale(0.98);
}

.btn-danger {
  background: var(--danger);
}

.btn-danger:hover {
  background: var(--danger-hover);
}

.btn-sm {
  padding: 0.3rem 0.75rem;
  font-size: 0.8rem;
  margin-top: 0;
}

/* ─── Tabs ─────────────────────────────────────────────── */
.tabs {
  display: flex;
  gap: 0.5rem;
  margin-bottom: 1.5rem;
  border-bottom: 1px solid var(--border);
  padding-bottom: 0;
}

.tab-btn {
  background: none;
  border: none;
  border-bottom: 2px solid transparent;
  padding: 0.5rem 1rem;
  margin-bottom: -1px;
  color: var(--text-muted);
  font-size: 0.9rem;
  font-weight: 500;
  cursor: pointer;
  margin-top: 0;
  transition: color 0.15s, border-color 0.15s;
}

.tab-btn:hover {
  color: var(--text);
}

.tab-btn.active {
  color: var(--accent-hover);
  border-bottom-color: var(--accent);
}

.tab-panel {
  display: none;
}
.tab-panel.active {
  display: block;
}

/* ─── Result ───────────────────────────────────────────── */
.result {
  margin-top: 1.75rem;
  padding-top: 1.5rem;
  border-top: 1px solid var(--border);
}

.result h2 {
  font-size: 1rem;
  font-weight: 600;
  color: var(--text-muted);
  text-transform: uppercase;
  letter-spacing: 0.05em;
  margin-bottom: 0.75rem;
}

pre {
  background: var(--surface-2);
  border: 1px solid var(--border);
  padding: 1rem;
  border-radius: var(--radius);
  white-space: pre-wrap;
  word-break: break-word;
  font-size: 0.9rem;
  line-height: 1.7;
  color: var(--text);
}

.generated-image {
  max-width: 100%;
  border-radius: var(--radius);
  margin-top: 0.5rem;
  border: 1px solid var(--border);
}

.generated-video {
  max-width: 100%;
  border-radius: var(--radius);
  margin-top: 0.5rem;
}

.image-upload-preview {
  margin-top: 0.75rem;
}

.image-grid {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(160px, 1fr));
  gap: 1rem;
  margin-top: 0.75rem;
}

.image-grid-item {
  display: flex;
  flex-direction: column;
  align-items: center;
}

.image-grid-item .generated-image {
  width: 100%;
  aspect-ratio: 1 / 1;
  object-fit: cover;
}

/* ─── Admin table ──────────────────────────────────────── */
.stats-grid {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(140px, 1fr));
  gap: 1rem;
  margin-bottom: 2rem;
}

.stat-box {
  background: var(--surface-2);
  border: 1px solid var(--border);
  border-radius: var(--radius);
  padding: 1rem 1.25rem;
}

.stat-box .stat-label {
  font-size: 0.75rem;
  font-weight: 600;
  text-transform: uppercase;
  letter-spacing: 0.05em;
  color: var(--text-muted);
}

.stat-box .stat-value {
  font-size: 2rem;
  font-weight: 700;
  color: var(--accent-hover);
  line-height: 1.2;
  margin-top: 0.25rem;
}

.table-wrap {
  overflow-x: auto;
}

table {
  width: 100%;
  border-collapse: collapse;
  font-size: 0.9rem;
}

th {
  text-align: left;
  padding: 0.5rem 0.75rem;
  font-size: 0.75rem;
  font-weight: 600;
  text-transform: uppercase;
  letter-spacing: 0.05em;
  color: var(--text-muted);
  border-bottom: 1px solid var(--border);
}

td {
  padding: 0.65rem 0.75rem;
  border-bottom: 1px solid var(--border);
  vertical-align: middle;
}

tr:last-child td {
  border-bottom: none;
}

.role-badge {
  display: inline-block;
  padding: 0.2rem 0.6rem;
  border-radius: 999px;
  font-size: 0.75rem;
  font-weight: 600;
  text-transform: uppercase;
  letter-spacing: 0.04em;
}

.role-admin {
  background: rgba(124, 111, 247, 0.2);
  color: var(--accent-hover);
}

.role-user {
  background: rgba(139, 144, 184, 0.15);
  color: var(--text-muted);
}

.table-actions {
  display: flex;
  gap: 0.5rem;
  align-items: center;
  flex-wrap: wrap;
}

/* ─── Loading overlay ──────────────────────────────────── */
#loading-overlay {
  display: none;
  position: fixed;
  inset: 0;
  background: rgba(15, 17, 23, 0.75);
  z-index: 1000;
  align-items: center;
  justify-content: center;
  flex-direction: column;
  gap: 1rem;
}

#loading-overlay.active {
  display: flex;
}

.spinner {
  width: 40px;
  height: 40px;
  border: 3px solid var(--border);
  border-top-color: var(--accent);
  border-radius: 50%;
  animation: spin 0.7s linear infinite;
}

.spinner-label {
  color: var(--text-muted);
  font-size: 0.9rem;
}

@keyframes spin {
  to {
    transform: rotate(360deg);
  }
}

/* ─── Misc ─────────────────────────────────────────────── */
.text-muted {
  color: var(--text-muted);
}
.mt-1 {
  margin-top: 0.5rem;
}
.mt-2 {
  margin-top: 1rem;
}
.section-title {
  font-size: 1rem;
  font-weight: 600;
  color: var(--text-muted);
  text-transform: uppercase;
  letter-spacing: 0.05em;
  margin-bottom: 1rem;
}

/* ─── Responsive ───────────────────────────────────────── */
@media (max-width: 640px) {
  .hamburger {
    display: flex;
}
|
||||
|
||||
.nav-links {
|
||||
display: none;
|
||||
flex-direction: column;
|
||||
align-items: flex-start;
|
||||
position: absolute;
|
||||
top: 3.5rem;
|
||||
left: 0;
|
||||
right: 0;
|
||||
background: var(--surface);
|
||||
border-bottom: 1px solid var(--border);
|
||||
padding: 0.75rem 1rem;
|
||||
gap: 0.1rem;
|
||||
z-index: 99;
|
||||
}
|
||||
|
||||
.nav-links.open {
|
||||
display: flex;
|
||||
}
|
||||
|
||||
.nav-dropdown-menu {
|
||||
position: static;
|
||||
border: none;
|
||||
background: none;
|
||||
padding-left: 0.75rem;
|
||||
}
|
||||
|
||||
.card {
|
||||
padding: 1.25rem;
|
||||
}
|
||||
|
||||
.stats-grid {
|
||||
grid-template-columns: repeat(2, 1fr);
|
||||
}
|
||||
}
|
||||
|
||||
nav {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
justify-content: space-between;
|
||||
height: 3.5rem;
|
||||
}
|
||||
.brand {
|
||||
color: #e0e0ff;
|
||||
font-weight: 700;
|
||||
text-decoration: none;
|
||||
font-size: 1.1rem;
|
||||
}
|
||||
.nav-links a {
|
||||
color: #c0c0dd;
|
||||
text-decoration: none;
|
||||
margin-left: 1.25rem;
|
||||
font-size: 0.9rem;
|
||||
}
|
||||
.nav-links a:hover {
|
||||
color: #fff;
|
||||
}
|
||||
|
||||
/* Main */
|
||||
main {
|
||||
max-width: 640px;
|
||||
margin: 2rem auto;
|
||||
padding: 0 1rem;
|
||||
}
|
||||
|
||||
/* Alerts */
|
||||
.alert {
|
||||
padding: 0.75rem 1rem;
|
||||
border-radius: 6px;
|
||||
margin-bottom: 1rem;
|
||||
font-size: 0.9rem;
|
||||
}
|
||||
.alert-success {
|
||||
background: #d4edda;
|
||||
color: #155724;
|
||||
}
|
||||
.alert-error {
|
||||
background: #f8d7da;
|
||||
color: #721c24;
|
||||
}
|
||||
|
||||
/* Card */
|
||||
.card {
|
||||
background: rgba(255, 255, 255, 0.08);
|
||||
border-radius: 10px;
|
||||
padding: 2rem;
|
||||
box-shadow: 0 1px 4px rgba(0, 0, 0, 0.08);
|
||||
}
|
||||
.card h1 {
|
||||
font-size: 1.5rem;
|
||||
margin-bottom: 1.5rem;
|
||||
}
|
||||
|
||||
/* Forms */
|
||||
form label {
|
||||
display: block;
|
||||
font-size: 0.85rem;
|
||||
font-weight: 600;
|
||||
margin-bottom: 0.3rem;
|
||||
margin-top: 1rem;
|
||||
}
|
||||
form input,
|
||||
form select,
|
||||
form textarea {
|
||||
width: 100%;
|
||||
padding: 0.55rem 0.75rem;
|
||||
border: 1px solid #ccc;
|
||||
border-radius: 6px;
|
||||
font-size: 0.95rem;
|
||||
font-family: inherit;
|
||||
}
|
||||
form input:focus,
|
||||
form select:focus,
|
||||
form textarea:focus {
|
||||
outline: none;
|
||||
border-color: #5c6bc0;
|
||||
box-shadow: 0 0 0 2px rgba(92, 107, 192, 0.2);
|
||||
}
|
||||
button[type="submit"],
|
||||
.btn {
|
||||
display: inline-block;
|
||||
margin-top: 1.25rem;
|
||||
padding: 0.6rem 1.4rem;
|
||||
background: #5c6bc0;
|
||||
color: #fff;
|
||||
border: none;
|
||||
border-radius: 6px;
|
||||
font-size: 0.95rem;
|
||||
cursor: pointer;
|
||||
text-decoration: none;
|
||||
}
|
||||
button[type="submit"]:hover,
|
||||
.btn:hover {
|
||||
background: #3f51b5;
|
||||
}
|
||||
|
||||
/* Result */
|
||||
.result {
|
||||
margin-top: 1.5rem;
|
||||
padding-top: 1.5rem;
|
||||
border-top: 1px solid #eee;
|
||||
}
|
||||
.result h2 {
|
||||
margin-bottom: 0.75rem;
|
||||
font-size: 1.1rem;
|
||||
}
|
||||
pre {
|
||||
background: #f0f0f0;
|
||||
padding: 1rem;
|
||||
border-radius: 6px;
|
||||
white-space: pre-wrap;
|
||||
word-break: break-word;
|
||||
}
|
||||
.generated-image {
|
||||
max-width: 100%;
|
||||
border-radius: 8px;
|
||||
margin-top: 0.5rem;
|
||||
}
|
||||
.generated-video {
|
||||
max-width: 100%;
|
||||
border-radius: 8px;
|
||||
margin-top: 0.5rem;
|
||||
}
|
||||
|
||||
/* ─── Chat interface ─────────────────────────────────────────────────────── */
|
||||
.chat-page {
|
||||
display: flex;
|
||||
flex-direction: column;
|
||||
height: calc(100vh - 100px);
|
||||
max-height: 900px;
|
||||
}
|
||||
|
||||
.chat-header {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
justify-content: space-between;
|
||||
margin-bottom: 0.75rem;
|
||||
}
|
||||
|
||||
.chat-config {
|
||||
border: 1px solid var(--border, #ddd);
|
||||
border-radius: 6px;
|
||||
padding: 0.5rem 0.75rem;
|
||||
margin-bottom: 0.75rem;
|
||||
font-size: 0.9rem;
|
||||
}
|
||||
|
||||
.chat-config summary {
|
||||
cursor: pointer;
|
||||
font-weight: 500;
|
||||
user-select: none;
|
||||
}
|
||||
|
||||
.chat-config-body {
|
||||
display: flex;
|
||||
flex-direction: column;
|
||||
gap: 0.4rem;
|
||||
margin-top: 0.5rem;
|
||||
}
|
||||
|
||||
.chat-history {
|
||||
flex: 1;
|
||||
overflow-y: auto;
|
||||
display: flex;
|
||||
flex-direction: column;
|
||||
gap: 0.75rem;
|
||||
padding: 0.5rem 0;
|
||||
border-top: 1px solid var(--border, #ddd);
|
||||
border-bottom: 1px solid var(--border, #ddd);
|
||||
margin-bottom: 0.75rem;
|
||||
}
|
||||
|
||||
.chat-empty {
|
||||
color: var(--text-muted, #888);
|
||||
text-align: center;
|
||||
margin: auto;
|
||||
font-size: 0.9rem;
|
||||
}
|
||||
|
||||
.chat-bubble {
|
||||
max-width: 80%;
|
||||
padding: 0.6rem 0.9rem;
|
||||
border-radius: 12px;
|
||||
font-size: 0.9rem;
|
||||
line-height: 1.5;
|
||||
}
|
||||
|
||||
.chat-bubble--user {
|
||||
align-self: flex-end;
|
||||
background: var(--accent, #7c6ff7);
|
||||
color: #fff;
|
||||
border-bottom-right-radius: 3px;
|
||||
}
|
||||
|
||||
.chat-bubble--assistant {
|
||||
align-self: flex-start;
|
||||
background: var(--surface-2, #f0f0f0);
|
||||
color: var(--text, #222);
|
||||
border-bottom-left-radius: 3px;
|
||||
}
|
||||
|
||||
.bubble-role {
|
||||
display: block;
|
||||
font-size: 0.7rem;
|
||||
font-weight: 600;
|
||||
text-transform: uppercase;
|
||||
opacity: 0.7;
|
||||
margin-bottom: 0.25rem;
|
||||
}
|
||||
|
||||
.bubble-content {
|
||||
white-space: pre-wrap;
|
||||
word-break: break-word;
|
||||
}
|
||||
|
||||
.bubble-meta {
|
||||
display: block;
|
||||
font-size: 0.7rem;
|
||||
opacity: 0.6;
|
||||
margin-top: 0.3rem;
|
||||
text-align: right;
|
||||
}
|
||||
|
||||
.chat-input-row {
|
||||
display: flex;
|
||||
gap: 0.5rem;
|
||||
align-items: flex-end;
|
||||
}
|
||||
|
||||
.chat-input-textarea {
|
||||
flex: 1;
|
||||
resize: none;
|
||||
border-radius: 8px;
|
||||
padding: 0.5rem 0.75rem;
|
||||
font-size: 0.95rem;
|
||||
min-height: 2.5rem;
|
||||
max-height: 8rem;
|
||||
}
|
||||
|
||||
.btn-sm {
|
||||
padding: 0.3rem 0.7rem;
|
||||
font-size: 0.8rem;
|
||||
}
|
||||
@@ -0,0 +1,279 @@
{% extends "base.html" %} {% block title %}Admin — All You Can GET AI{% endblock
%} {% block content %}
<div class="card admin-page">
  <h1>Admin Dashboard</h1>

  {% if stats %}
  <div class="stats-grid">
    <div class="stat-box">
      <div class="stat-label">Total users</div>
      <div class="stat-value">{{ stats.get('total_users', 0) }}</div>
    </div>
    <div class="stat-box">
      <div class="stat-label">Active tokens</div>
      <div class="stat-value">{{ stats.get('active_refresh_tokens', 0) }}</div>
    </div>
    <div class="stat-box">
      <div class="stat-label">Admins</div>
      <div class="stat-value">{{ stats.get('admin_users', 0) }}</div>
    </div>
  </div>
  {% endif %}

  <h2 class="section-title">Users</h2>
  <div class="table-wrap">
    <table>
      <thead>
        <tr>
          <th>Email</th>
          <th>Role</th>
          <th>Actions</th>
        </tr>
      </thead>
      <tbody>
        {% for u in users %}
        <tr>
          <td>{{ u.email }}</td>
          <td>
            <span class="role-badge role-{{ u.role }}">{{ u.role }}</span>
          </td>
          <td>
            <div class="table-actions">
              <!-- Role toggle -->
              <form
                method="post"
                action="{{ url_for('admin_set_role', user_id=u.id) }}"
              >
                <input
                  type="hidden"
                  name="role"
                  value="{{ 'user' if u.role == 'admin' else 'admin' }}"
                />
                <button type="submit" class="btn btn-sm">
                  Make {{ 'user' if u.role == 'admin' else 'admin' }}
                </button>
              </form>
              <!-- Delete -->
              {% if u.id != session.get('user_id') %}
              <form
                method="post"
                action="{{ url_for('admin_delete_user', user_id=u.id) }}"
                onsubmit="return confirm('Delete {{ u.email }}?')"
              >
                <button type="submit" class="btn btn-sm btn-danger">
                  Delete
                </button>
              </form>
              {% endif %}
            </div>
          </td>
        </tr>
        {% else %}
        <tr>
          <td colspan="3" class="text-muted">No users found.</td>
        </tr>
        {% endfor %}
      </tbody>
    </table>
  </div>

  <!-- ── Video Jobs ──────────────────────────────────────────────── -->
  <h2 class="section-title" style="margin-top: 2rem">Video Jobs</h2>

  <div
    style="
      display: flex;
      gap: 1rem;
      align-items: center;
      flex-wrap: wrap;
      margin-bottom: 1rem;
    "
  >
    <label for="vj-status-filter" style="font-weight: 600"
      >Filter by status:</label
    >
    <select id="vj-status-filter" class="form-control" style="width: auto">
      <option value="">All</option>
      <option value="queued">Queued</option>
      <option value="processing">Processing</option>
      <option value="completed">Completed</option>
      <option value="failed">Failed</option>
      <option value="cancelled">Cancelled</option>
    </select>
    <label for="vj-sort" style="font-weight: 600">Sort:</label>
    <select id="vj-sort" class="form-control" style="width: auto">
      <option value="created_desc">Created (newest first)</option>
      <option value="created_asc">Created (oldest first)</option>
      <option value="updated_desc">Updated (newest first)</option>
      <option value="status_asc">Status (A–Z)</option>
      <option value="model_asc">Model (A–Z)</option>
    </select>
    <button id="vj-refresh" class="btn btn-sm">Refresh</button>
    <span
      id="vj-count"
      style="color: var(--text-muted, #888); font-size: 0.9em"
    ></span>
  </div>

  <div class="table-wrap">
    <table id="vj-table">
      <thead>
        <tr>
          <th>User</th>
          <th>Status</th>
          <th>Model</th>
          <th>Prompt</th>
          <th>Created</th>
          <th>Updated</th>
          <th>Actions</th>
        </tr>
      </thead>
      <tbody id="vj-tbody">
        <tr>
          <td colspan="7" class="text-muted">Loading…</td>
        </tr>
      </tbody>
    </table>
  </div>
</div>

<script>
  (function () {
    let allJobs = [];

    async function loadJobs() {
      document.getElementById("vj-tbody").innerHTML =
        '<tr><td colspan="7" class="text-muted">Loading…</td></tr>';
      try {
        const r = await fetch("/api/admin/videos");
        if (!r.ok) throw new Error(await r.text());
        allJobs = await r.json();
        renderJobs();
      } catch (e) {
        document.getElementById("vj-tbody").innerHTML =
          `<tr><td colspan="7" style="color:red;">Error: ${e.message}</td></tr>`;
      }
    }

    function renderJobs() {
      const statusFilter = document.getElementById("vj-status-filter").value;
      const sort = document.getElementById("vj-sort").value;

      let jobs = statusFilter
        ? allJobs.filter((j) => j.status === statusFilter)
        : [...allJobs];

      jobs.sort((a, b) => {
        if (sort === "created_asc")
          return new Date(a.created_at) - new Date(b.created_at);
        if (sort === "updated_desc")
          return new Date(b.updated_at) - new Date(a.updated_at);
        if (sort === "status_asc") return a.status.localeCompare(b.status);
        if (sort === "model_asc") return a.model_id.localeCompare(b.model_id);
        return new Date(b.created_at) - new Date(a.created_at); // created_desc default
      });

      document.getElementById("vj-count").textContent =
        `${jobs.length} job${jobs.length !== 1 ? "s" : ""}`;

      const tbody = document.getElementById("vj-tbody");
      if (jobs.length === 0) {
        tbody.innerHTML =
          '<tr><td colspan="7" class="text-muted">No jobs found.</td></tr>';
        return;
      }

      const statusColor = {
        completed: "color:var(--success-color,#4caf50)",
        failed: "color:var(--danger-color,#e53935)",
        cancelled: "color:var(--danger-color,#e53935)",
        processing: "color:var(--warning-color,#fb8c00)",
        queued: "color:var(--warning-color,#fb8c00)",
      };

      tbody.innerHTML = jobs
        .map((job) => {
          const sc = statusColor[job.status] || "";
          const canRetry =
            job.status === "failed" || job.status === "cancelled";
          const canCancel =
            job.status === "queued" || job.status === "processing";
          const actions = [
            canRetry
              ? `<button class="btn btn-sm vj-retry" data-id="${job.id}">Retry</button>`
              : "",
            canCancel
              ? `<button class="btn btn-sm vj-cancel" data-id="${job.id}">Cancel</button>`
              : "",
            `<button class="btn btn-sm btn-danger vj-delete" data-id="${job.id}">Delete</button>`,
          ].join(" ");
          const prompt =
            job.prompt.length > 60 ? job.prompt.slice(0, 57) + "…" : job.prompt;
          const created = job.created_at
            ? new Date(job.created_at).toLocaleString()
            : "—";
          const updated = job.updated_at
            ? new Date(job.updated_at).toLocaleString()
            : "—";
          // Escape double quotes so the prompt is safe inside the title attribute.
          return `<tr>
            <td>${job.user_email || "—"}</td>
            <td style="${sc};font-weight:600;">${job.status}</td>
            <td style="font-size:.85em;">${job.model_id}</td>
            <td title="${job.prompt.replace(/"/g, "&quot;")}">${prompt}</td>
            <td style="white-space:nowrap;">${created}</td>
            <td style="white-space:nowrap;">${updated}</td>
            <td style="white-space:nowrap;">${actions}</td>
          </tr>`;
        })
        .join("");
    }

    async function apiPost(path) {
      const r = await fetch(path, { method: "POST" });
      if (!r.ok) {
        const d = await r.json().catch(() => ({}));
        throw new Error(d.detail || r.statusText);
      }
      return r.json();
    }

    async function apiDelete(path) {
      const r = await fetch(path, { method: "DELETE" });
      if (!r.ok) {
        const d = await r.json().catch(() => ({}));
        throw new Error(d.detail || r.statusText);
      }
      return r.json();
    }

    document
      .getElementById("vj-tbody")
      .addEventListener("click", async function (e) {
        const btn = e.target.closest("button");
        if (!btn) return;
        const id = btn.dataset.id;
        try {
          if (btn.classList.contains("vj-retry"))
            await apiPost(`/api/admin/videos/${id}/retry`);
          if (btn.classList.contains("vj-cancel"))
            await apiPost(`/api/admin/videos/${id}/cancel`);
          if (btn.classList.contains("vj-delete")) {
            if (!confirm("Permanently delete this video job?")) return;
            await apiDelete(`/api/admin/videos/${id}`);
          }
          await loadJobs();
        } catch (err) {
          alert("Error: " + err.message);
        }
      });

    document
      .getElementById("vj-status-filter")
      .addEventListener("change", renderJobs);
    document.getElementById("vj-sort").addEventListener("change", renderJobs);
    document.getElementById("vj-refresh").addEventListener("click", loadJobs);

    loadJobs();
  })();
</script>
{% endblock %}
@@ -0,0 +1,154 @@
{% extends "base.html" %} {% block title %}Admin - Model Management{% endblock
%} {% block content %}
<div class="container mx-auto px-4 py-8">
  <h1 class="text-3xl font-bold mb-6">Admin: Model Management</h1>

  <!-- Cache Status -->
  <div class="bg-gray-800 p-4 rounded-lg shadow-md mb-6">
    <h2 class="text-xl font-semibold mb-2">Cache Status</h2>
    <div id="cache-status" class="grid grid-cols-2 gap-4">
      <p>
        <strong>Last Updated:</strong> <span id="last-updated">Loading...</span>
      </p>
      <p>
        <strong>Model Count:</strong> <span id="model-count">Loading...</span>
      </p>
    </div>
    <button
      id="refresh-button"
      class="mt-4 bg-blue-500 hover:bg-blue-700 text-white font-bold py-2 px-4 rounded"
    >
      Refresh Cache
    </button>
    <p id="refresh-status" class="mt-2 text-sm"></p>
  </div>

  <!-- Model List -->
  <div class="bg-gray-800 p-4 rounded-lg shadow-md">
    <h2 class="text-xl font-semibold mb-2">Available Models</h2>
    <table id="models-table" class="min-w-full divide-y divide-gray-700">
      <thead class="bg-gray-700">
        <tr>
          <th
            scope="col"
            class="px-6 py-3 text-left text-xs font-medium text-gray-300 uppercase tracking-wider"
          >
            Name
          </th>
          <th
            scope="col"
            class="px-6 py-3 text-left text-xs font-medium text-gray-300 uppercase tracking-wider"
          >
            ID
          </th>
          <th
            scope="col"
            class="px-6 py-3 text-left text-xs font-medium text-gray-300 uppercase tracking-wider"
          >
            Modality
          </th>
          <th
            scope="col"
            class="px-6 py-3 text-left text-xs font-medium text-gray-300 uppercase tracking-wider"
          >
            Context Length
          </th>
        </tr>
      </thead>
      <tbody
        id="models-table-body"
        class="bg-gray-800 divide-y divide-gray-700"
      >
        <!-- Data will be populated by JavaScript -->
        <tr>
          <td colspan="4" class="text-center py-4">Loading models...</td>
        </tr>
      </tbody>
    </table>
  </div>
</div>

<script>
  document.addEventListener("DOMContentLoaded", function () {
    const lastUpdatedEl = document.getElementById("last-updated");
    const modelCountEl = document.getElementById("model-count");
    const modelsTableBody = document.getElementById("models-table-body");
    const refreshButton = document.getElementById("refresh-button");
    const refreshStatus = document.getElementById("refresh-status");

    async function fetchCacheStatus() {
      try {
        const response = await fetch("/api/v1/admin/models/status");
        if (!response.ok) throw new Error("Failed to fetch status");
        const data = await response.json();
        lastUpdatedEl.textContent = data.last_updated
          ? new Date(data.last_updated).toLocaleString()
          : "Never";
        modelCountEl.textContent = data.model_count;
      } catch (error) {
        lastUpdatedEl.textContent = "Error";
        modelCountEl.textContent = "Error";
        console.error("Error fetching cache status:", error);
      }
    }

    async function fetchModels() {
      try {
        const response = await fetch("/api/v1/admin/models");
        if (!response.ok) throw new Error("Failed to fetch models");
        const models = await response.json();
        modelsTableBody.innerHTML = ""; // Clear loading message
        if (models.length === 0) {
          modelsTableBody.innerHTML =
            '<tr><td colspan="4" class="text-center py-4">No models found in cache.</td></tr>';
        } else {
          models.forEach((model) => {
            const row = `
              <tr>
                <td class="px-6 py-4 whitespace-nowrap">${model.name}</td>
                <td class="px-6 py-4 whitespace-nowrap font-mono text-sm">${model.id}</td>
                <td class="px-6 py-4 whitespace-nowrap">${model.modality}</td>
                <td class="px-6 py-4 whitespace-nowrap">${model.context_length || "N/A"}</td>
              </tr>
            `;
            modelsTableBody.innerHTML += row;
          });
        }
      } catch (error) {
        modelsTableBody.innerHTML =
          '<tr><td colspan="4" class="text-center py-4 text-red-500">Error loading models.</td></tr>';
        console.error("Error fetching models:", error);
      }
    }

    async function refreshCache() {
      refreshButton.disabled = true;
      refreshStatus.textContent = "Refreshing...";
      refreshStatus.classList.remove("text-red-500", "text-green-500");

      try {
        const response = await fetch("/api/v1/admin/models/refresh", {
          method: "POST",
        });
        const data = await response.json();
        if (!response.ok) {
          throw new Error(data.detail || "Failed to refresh cache");
        }
        refreshStatus.textContent = `Successfully refreshed ${data.refreshed} models. Total: ${data.total_models}.`;
        refreshStatus.classList.add("text-green-500");
        fetchCacheStatus();
        fetchModels();
      } catch (error) {
        refreshStatus.textContent = `Error: ${error.message}`;
        refreshStatus.classList.add("text-red-500");
      } finally {
        refreshButton.disabled = false;
      }
    }

    fetchCacheStatus();
    fetchModels();
    refreshButton.addEventListener("click", refreshCache);
  });
</script>
{% endblock %}
@@ -0,0 +1,182 @@
{% extends "base.html" %} {% block title %}Admin - Video Jobs{% endblock %} {%
block content %}
<div class="container mx-auto px-4 py-8">
  <h1 class="text-3xl font-bold mb-6">Admin: Video Jobs</h1>

  <!-- Purge Old Jobs -->
  <div class="bg-gray-800 p-4 rounded-lg shadow-md mb-6">
    <h2 class="text-xl font-semibold mb-2">Maintenance</h2>
    <p class="text-gray-400 mb-4">
      Delete all completed, failed, or cancelled jobs older than 30 days.
    </p>
    <button
      id="purge-button"
      class="bg-red-500 hover:bg-red-700 text-white font-bold py-2 px-4 rounded"
    >
      Purge Old Jobs
    </button>
    <p id="purge-status" class="mt-2 text-sm"></p>
  </div>

  <!-- Video Jobs Table -->
  <div class="bg-gray-800 p-4 rounded-lg shadow-md overflow-x-auto">
    <table class="min-w-full divide-y divide-gray-700">
      <thead class="bg-gray-700">
        <tr>
          <th
            scope="col"
            class="px-4 py-3 text-left text-xs font-medium text-gray-300 uppercase tracking-wider"
          >
            User
          </th>
          <th
            scope="col"
            class="px-4 py-3 text-left text-xs font-medium text-gray-300 uppercase tracking-wider"
          >
            Status
          </th>
          <th
            scope="col"
            class="px-4 py-3 text-left text-xs font-medium text-gray-300 uppercase tracking-wider"
          >
            Model
          </th>
          <th
            scope="col"
            class="px-4 py-3 text-left text-xs font-medium text-gray-300 uppercase tracking-wider"
          >
            Prompt
          </th>
          <th
            scope="col"
            class="px-4 py-3 text-left text-xs font-medium text-gray-300 uppercase tracking-wider"
          >
            Created
          </th>
          <th
            scope="col"
            class="px-4 py-3 text-left text-xs font-medium text-gray-300 uppercase tracking-wider"
          >
            Actions
          </th>
        </tr>
      </thead>
      <tbody id="jobs-table-body" class="bg-gray-800 divide-y divide-gray-700">
        <tr>
          <td colspan="6" class="text-center py-4">Loading jobs...</td>
        </tr>
      </tbody>
    </table>
  </div>
</div>

<script>
  document.addEventListener("DOMContentLoaded", function () {
    const jobsTableBody = document.getElementById("jobs-table-body");
    const purgeButton = document.getElementById("purge-button");
    const purgeStatus = document.getElementById("purge-status");

    async function fetchJobs() {
      try {
        const response = await fetch(
          "{{ config['BACKEND_URL'] }}/admin/videos",
          {
            headers: {
              Authorization: "Bearer {{ session['access_token'] }}",
            },
          },
        );
        if (!response.ok) throw new Error("Failed to fetch jobs");
        const jobs = await response.json();
        jobsTableBody.innerHTML = "";
        if (jobs.length === 0) {
          jobsTableBody.innerHTML =
            '<tr><td colspan="6" class="text-center py-4">No video jobs found.</td></tr>';
        } else {
          jobs.forEach((job) => {
            const statusClass =
              job.status === "completed"
                ? "text-green-400"
                : job.status === "failed" || job.status === "cancelled"
                  ? "text-red-400"
                  : "text-yellow-400";
            const cancelBtn =
              job.status === "queued" || job.status === "processing"
                ? `<button class="cancel-btn text-red-400 hover:text-red-600 text-sm" data-job-id="${job.id}">Cancel</button>`
                : "";
            const row = `
              <tr>
                <td class="px-4 py-3 whitespace-nowrap text-sm">${job.user_email || "Unknown"}</td>
                <td class="px-4 py-3 whitespace-nowrap text-sm font-semibold ${statusClass}">${job.status}</td>
                <td class="px-4 py-3 whitespace-nowrap text-sm">${job.model_id}</td>
                <td class="px-4 py-3 text-sm truncate max-w-xs">${job.prompt}</td>
                <td class="px-4 py-3 whitespace-nowrap text-sm">${new Date(job.created_at).toLocaleString()}</td>
                <td class="px-4 py-3 whitespace-nowrap text-sm">${cancelBtn}</td>
              </tr>
            `;
            jobsTableBody.innerHTML += row;
          });
        }
      } catch (error) {
        jobsTableBody.innerHTML =
          '<tr><td colspan="6" class="text-center py-4 text-red-500">Error loading jobs.</td></tr>';
        console.error("Error fetching jobs:", error);
      }
    }

    async function purgeJobs() {
      purgeButton.disabled = true;
      purgeStatus.textContent = "Purging...";
      purgeStatus.classList.remove("text-red-500", "text-green-500");

      try {
        const response = await fetch(
          "{{ config['BACKEND_URL'] }}/admin/videos/purge",
          {
            method: "POST",
            headers: {
              Authorization: "Bearer {{ session['access_token'] }}",
            },
          },
        );
        const data = await response.json();
        if (!response.ok)
          throw new Error(data.detail || "Failed to purge jobs");
        purgeStatus.textContent = `Purged ${data.deleted} jobs. ${data.remaining} remaining.`;
        purgeStatus.classList.add("text-green-500");
        fetchJobs();
      } catch (error) {
        purgeStatus.textContent = `Error: ${error.message}`;
        purgeStatus.classList.add("text-red-500");
      } finally {
        purgeButton.disabled = false;
      }
    }

    // Cancel button event delegation
    jobsTableBody.addEventListener("click", async function (e) {
      if (e.target.classList.contains("cancel-btn")) {
        const jobId = e.target.dataset.jobId;
        try {
          const response = await fetch(
            `{{ config['BACKEND_URL'] }}/admin/videos/${jobId}/cancel`,
            {
              method: "POST",
              headers: {
                Authorization: "Bearer {{ session['access_token'] }}",
              },
            },
          );
          if (!response.ok) throw new Error("Failed to cancel job");
          fetchJobs();
        } catch (error) {
          alert(`Error: ${error.message}`);
        }
      }
    });

    purgeButton.addEventListener("click", purgeJobs);
    fetchJobs();
  });
</script>
{% endblock %}
@@ -0,0 +1,58 @@
<!doctype html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>{% block title %}All You Can GET AI{% endblock %}</title>
    <link
      rel="stylesheet"
      href="{{ url_for('static', filename='style.css') }}"
    />
    <script src="https://cdn.tailwindcss.com"></script>
  </head>
  <body>
    <header>
      <nav>
        <a href="{{ url_for('index') }}" class="brand">All You Can GET AI</a>

        <button class="hamburger" aria-label="Open menu">
          <span></span><span></span><span></span>
        </button>

        <div class="nav-links">
          {% if session.get('access_token') %}
          <a href="{{ url_for('dashboard') }}">Dashboard</a>
          <a href="{{ url_for('gallery') }}">Gallery</a>

          <a href="{{ url_for('generate_text') }}">Generate Text</a>
          <a href="{{ url_for('generate_image') }}">Generate Image</a>
          <a href="{{ url_for('generate_video') }}">Generate Video</a>

          <a href="{{ url_for('profile') }}">Profile</a>
          {% if session.get('user_role') == 'admin' %}
          <a href="{{ url_for('admin') }}">Admin</a>
          {% endif %}
          <a href="{{ url_for('logout') }}">Log out</a>
          {% else %}
          <a href="{{ url_for('login') }}">Log in</a>
          <a href="{{ url_for('register') }}">Register</a>
          {% endif %}
        </div>
      </nav>
    </header>

    <div id="loading-overlay">
      <div class="spinner"></div>
      <span class="spinner-label">Working…</span>
    </div>

    <main>
      {% with messages = get_flashed_messages(with_categories=true) %} {% for
      category, message in messages %}
      <div class="alert alert-{{ category }}">{{ message }}</div>
      {% endfor %} {% endwith %} {% block content %}{% endblock %}
    </main>

    <script src="{{ url_for('static', filename='app.js') }}"></script>
  </body>
</html>
@@ -0,0 +1,116 @@
{% extends "base.html" %} {% block title %}Dashboard — All You Can GET AI{%
endblock %} {% block content %}
<div class="card">
  <h1>Welcome{% if user.get('email') %}, {{ user.email }}{% endif %}</h1>
  <p>Role: <strong>{{ user.get('role', 'user') }}</strong></p>
  <a href="{{ url_for('generate') }}" class="btn">Start generating</a>
</div>

{% if pending_videos %}
<div class="card mt-2">
  <h2>Pending Video Jobs</h2>
  <div class="image-grid">
    {% for vid in pending_videos %}
    <a
      href="{{ url_for('video_detail', video_id=vid.id) }}"
      class="image-grid-item"
    >
      <div
        style="
          background: #1a1a1a;
          border-radius: 6px;
          padding: 2rem;
          text-align: center;
        "
      >
        <span class="text-muted">{{ vid.status | capitalize }} …</span>
      </div>
      <p class="text-muted" style="font-size: 0.75rem; margin-top: 0.25rem">
        <strong>{{ vid.model_id }}</strong><br />{{ vid.prompt[:80] }}{% if
        vid.prompt|length > 80 %}…{% endif %}
      </p>
    </a>
    {% endfor %}
  </div>
</div>
{% endif %} {% if generated_images %}
<div class="card mt-2">
  <h2>Generated images</h2>
  <div class="image-grid">
    {% for img in generated_images %}
    <a
      href="{{ url_for('image_detail', image_id=img.id) }}"
      class="image-grid-item"
    >
      <img
        src="{{ img.image_data }}"
        alt="{{ img.prompt }}"
        class="generated-image"
        loading="lazy"
      />
      <p class="text-muted" style="font-size: 0.75rem; margin-top: 0.25rem">
        <strong>{{ img.model_id }}</strong><br />{{ img.prompt[:80] }}{% if
        img.prompt|length > 80 %}…{% endif %}
      </p>
    </a>
    {% endfor %}
  </div>
</div>
{% endif %} {% if completed_videos %}
<div class="card mt-2">
  <h2>Generated videos</h2>
  <div class="image-grid">
    {% for vid in completed_videos %}
    <a
      href="{{ url_for('video_detail', video_id=vid.id) }}"
      class="image-grid-item"
    >
      {% if vid.video_url %}
      <video controls style="max-width: 100%; border-radius: 6px">
        <source src="{{ vid.video_url }}" />
        Your browser does not support the video tag.
      </video>
      {% else %}
      <div
        style="
          background: #1a1a1a;
          border-radius: 6px;
          padding: 2rem;
          text-align: center;
        "
      >
        <span class="text-muted">{{ vid.status | capitalize }} …</span>
      </div>
      {% endif %}
      <p class="text-muted" style="font-size: 0.75rem; margin-top: 0.25rem">
        <strong>{{ vid.model_id }}</strong><br />{{ vid.prompt[:80] }}{% if
        vid.prompt|length > 80 %}…{% endif %}<br />
        <em>{{ vid.status }}</em>
      </p>
    </a>
    {% endfor %}
  </div>
</div>
{% endif %} {% if images %}
<div class="card mt-2">
  <h2>Uploaded reference images</h2>
  <div class="image-grid">
    {% for img in images %}
    <a
      href="{{ url_for('upload_detail', image_id=img.id) }}"
      class="image-grid-item"
    >
      <img
        src="{{ url_for('serve_uploaded_image', image_id=img.id) }}"
        alt="{{ img.filename }}"
        class="generated-image"
        loading="lazy"
      />
      <p class="text-muted" style="font-size: 0.75rem; margin-top: 0.25rem">
        {{ img.filename }} — {{ (img.size_bytes / 1024) | round(1) }} KB
      </p>
    </a>
    {% endfor %}
  </div>
</div>
{% endif %} {% endblock %}
@@ -0,0 +1,300 @@
{% extends "base.html" %} {% block title %}My Gallery{% endblock %} {% block
content %}
<div
  class="container mx-auto px-4 py-8"
  data-current-page="1"
  data-per-page="12"
>
  <div class="container mx-auto px-4 py-8">
    <h1 class="text-3xl font-bold mb-6">My Gallery</h1>

    <!-- Pending Creations -->
    {% if pending_videos %}
    <div class="mb-12">
      <h2 class="text-2xl font-semibold mb-4 border-b border-gray-700 pb-2">
        Pending Creations
      </h2>
      <div
        class="grid grid-cols-1 md:grid-cols-2 lg:grid-cols-3 xl:grid-cols-4 gap-6"
      >
        {% for video in pending_videos %}
        <div
          class="block bg-gray-800 rounded-lg shadow-lg overflow-hidden hover:shadow-2xl transition-shadow duration-300 relative"
          data-pending-video-id="{{ video.id }}"
        >
          <a href="{{ url_for('video_detail', video_id=video.id) }}">
            <div class="p-4">
              <p class="font-bold text-lg truncate">{{ video.prompt }}</p>
              <p class="text-sm text-gray-400">
                Video Job Status:
                <span class="font-semibold text-yellow-400"
                  >{{ video.status }}</span
                >
              </p>
              <p class="text-xs text-gray-500 mt-2">
                Started: {{ video.created_at | fromisoformat | humantime }}
              </p>
            </div>
          </a>
          <div class="px-4 pb-4">
            <button
              class="cancel-pending-btn px-3 py-1 bg-red-600 hover:bg-red-700 text-white rounded text-xs"
              data-video-id="{{ video.id }}"
            >
              Cancel
            </button>
            <span class="cancel-pending-msg text-xs ml-2 hidden"></span>
          </div>
        </div>
        {% endfor %}
      </div>
    </div>
    {% endif %}

    <!-- Generated Images -->
    <div class="mb-12">
      <h2 class="text-2xl font-semibold mb-4 border-b border-gray-700 pb-2">
        Generated Images
      </h2>
      {% if generated_images %}
      <div
        class="grid grid-cols-1 md:grid-cols-2 lg:grid-cols-3 xl:grid-cols-4 gap-6"
      >
        {% for image in generated_images %}
        <a
          href="{{ url_for('image_detail', image_id=image.id) }}"
          class="block bg-gray-800 rounded-lg shadow-lg overflow-hidden hover:shadow-2xl transition-shadow duration-300"
        >
          <img
            src="{{ image.image_data }}"
            alt="{{ image.prompt }}"
            class="w-full h-48 object-cover"
          />
          <div class="p-4">
            <p class="font-bold text-sm truncate">{{ image.prompt }}</p>
            <p class="text-xs text-gray-400 mt-1">
              Image ID: {{ image.id[:8] }}...
            </p>
            <p class="text-xs text-gray-500 mt-1">
              {{ image.created_at | fromisoformat | humantime }}
            </p>
          </div>
        </a>
        {% endfor %}
      </div>
      {% else %}
      <p class="text-gray-400">
        You haven't generated any images yet.
        <a
          href="{{ url_for('generate_image') }}"
          class="text-blue-400 hover:underline"
          >Generate one now</a
        >.
      </p>
      {% endif %}
    </div>

    <!-- Generated Videos -->
    <div class="mb-12">
      <h2 class="text-2xl font-semibold mb-4 border-b border-gray-700 pb-2">
        Generated Videos
      </h2>
      {% if completed_videos %}
      <div
        class="grid grid-cols-1 md:grid-cols-2 lg:grid-cols-3 xl:grid-cols-4 gap-6"
      >
        {% for video in completed_videos %}
        <a
          href="{{ url_for('video_detail', video_id=video.id) }}"
          class="block bg-gray-800 rounded-lg shadow-lg overflow-hidden hover:shadow-2xl transition-shadow duration-300"
        >
          {% if video.video_url %}
          <img
            src="{{ video.video_url }}#t=0.1"
            alt="{{ video.prompt }}"
            class="w-full h-48 object-cover"
          />
          {% else %}
          <div class="w-full h-48 bg-black flex items-center justify-center">
            <svg
              class="w-12 h-12 text-gray-500"
              fill="none"
              stroke="currentColor"
              viewBox="0 0 24 24"
              xmlns="http://www.w3.org/2000/svg"
            >
              <path
                stroke-linecap="round"
                stroke-linejoin="round"
                stroke-width="2"
                d="M14.752 11.168l-3.197-2.132A1 1 0 0010 9.87v4.263a1 1 0 001.555.832l3.197-2.132a1 1 0 000-1.664z"
              ></path>
              <path
                stroke-linecap="round"
                stroke-linejoin="round"
                stroke-width="2"
                d="M21 12a9 9 0 11-18 0 9 9 0 0118 0z"
              ></path>
            </svg>
          </div>
          {% endif %}
          <div class="p-4">
            <p class="font-bold text-sm truncate">{{ video.prompt }}</p>
            <p class="text-xs text-gray-400 mt-1">
              Video ID: {{ video.id[:8] }}...
            </p>
            <p class="text-xs text-gray-500 mt-1">
              {{ video.created_at | fromisoformat | humantime }}
            </p>
          </div>
        </a>
        {% endfor %}
      </div>
      {% else %}
      <p class="text-gray-400">
        You haven't generated any videos yet.
        <a
          href="{{ url_for('generate_video') }}"
          class="text-blue-400 hover:underline"
          >Generate one now</a
        >.
      </p>
      {% endif %}
    </div>

    <!-- Uploaded Images -->
    <div>
      <h2 class="text-2xl font-semibold mb-4 border-b border-gray-700 pb-2">
        My Uploads
      </h2>
      {% if uploads %}
      <div
        class="grid grid-cols-1 md:grid-cols-2 lg:grid-cols-3 xl:grid-cols-4 gap-6"
      >
        {% for image in uploads %}
        <a
          href="{{ url_for('upload_detail', image_id=image.id) }}"
          class="block bg-gray-800 rounded-lg shadow-lg overflow-hidden hover:shadow-2xl transition-shadow duration-300"
        >
          <img
            src="{{ url_for('serve_uploaded_image', image_id=image.id) }}"
            alt="{{ image.filename }}"
            class="w-full h-48 object-cover"
          />
          <div class="p-4">
            <p class="font-bold text-sm truncate">{{ image.filename }}</p>
            <p class="text-xs text-gray-400 mt-1">
              Upload ID: {{ image.id[:8] }}...
            </p>
            <p class="text-xs text-gray-500 mt-1">
              {{ image.uploaded_at | fromisoformat | humantime }}
            </p>
          </div>
        </a>
        {% endfor %}
      </div>
      {% else %}
      <p class="text-gray-400">You haven't uploaded any images.</p>
      {% endif %}
    </div>
  </div>

  <!-- Infinite Scroll Loading Indicator -->
  <div id="loading-indicator" class="flex justify-center py-8 hidden">
    <div class="spinner"></div>
  </div>
  {% endblock %} {% block scripts %}
  <script>
    document.addEventListener("DOMContentLoaded", function () {
      const galleryContainers = document.querySelectorAll(".grid[data-grid]");
      const loadingIndicator = document.getElementById("loading-indicator");
      const container = document.querySelector(".container[data-current-page]");
      const currentPage = parseInt(container.dataset.currentPage);
      const perPage = parseInt(container.dataset.perPage);
      let isLoading = false;
      let hasMore = true;

      // Add data-grid attribute to all gallery grids
      document
        .querySelectorAll(".grid")
        .forEach((grid) => grid.setAttribute("data-grid", ""));

      // Infinite scroll handler
      window.addEventListener("scroll", async function () {
        if (!hasMore || isLoading) return;

        const scrollPosition = window.innerHeight + window.scrollY;
        const bottomThreshold = document.body.offsetHeight - 1000;

        if (scrollPosition >= bottomThreshold) {
          isLoading = true;
          loadingIndicator.classList.remove("hidden");
          // TODO: Implement actual fetching of next page of results and appending to the correct grid(s)
          // For demo purposes, we'll just simulate a delay and then hide the loading indicator
          // Simulate API call for next page
          // In real implementation, replace with actual backend fetch
          setTimeout(() => {
            isLoading = false;
            loadingIndicator.classList.add("hidden");
            // Real app would fetch /generate/images?page=${currentPage +1}&limit=${perPage}
            // and /generate/videos similarly
          }, 1500);
        }
      });
      // Cancel pending video buttons
      document.querySelectorAll(".cancel-pending-btn").forEach((btn) => {
        btn.addEventListener("click", async (e) => {
          e.preventDefault();
          e.stopPropagation();
          const videoId = btn.dataset.videoId;
          const msgEl = btn.parentElement.querySelector(".cancel-pending-msg");
          btn.disabled = true;
          btn.textContent = "Cancelling…";
          try {
            const resp = await fetch(
              "/generate/video/" + encodeURIComponent(videoId) + "/cancel",
              { method: "POST" },
            );
            if (resp.ok) {
              btn.classList.add("hidden");
              if (msgEl) {
                msgEl.textContent = "Cancelled";
                msgEl.classList.remove("hidden", "text-red-500");
                msgEl.classList.add("text-gray-300");
              }
              const card = document.querySelector(
                '[data-pending-video-id="' + videoId + '"]',
              );
              if (card) {
                const statusSpan = card.querySelector(".text-yellow-400");
                if (statusSpan) {
                  statusSpan.textContent = "cancelled";
                  statusSpan.classList.remove("text-yellow-400");
                  statusSpan.classList.add("text-gray-400");
                }
              }
            } else {
              const data = await resp.json().catch(() => ({}));
              btn.disabled = false;
              btn.textContent = "Cancel";
              if (msgEl) {
                msgEl.textContent = data.detail || "Failed";
                msgEl.classList.remove("hidden");
                msgEl.classList.add("text-red-500");
              }
            }
          } catch (err) {
            btn.disabled = false;
            btn.textContent = "Cancel";
            if (msgEl) {
              msgEl.textContent = "Error";
              msgEl.classList.remove("hidden");
              msgEl.classList.add("text-red-500");
            }
          }
        });
      });
    });
  </script>
  {% endblock %}
</div>
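The infinite-scroll handler in the gallery script leaves the actual page fetch as a TODO and only simulates a delay. A minimal sketch of what that fetch could look like, assuming a hypothetical JSON endpoint at `/generate/images` that accepts `page` and `limit` query parameters and returns `{ items: [...], has_more: boolean }` (none of this is confirmed by the backend shown here):

```javascript
// Sketch only: endpoint path, query parameters, and response shape are assumptions.
function buildNextPageUrl(basePath, page, perPage) {
  // URLSearchParams preserves insertion order: page first, then limit.
  const params = new URLSearchParams({ page: String(page), limit: String(perPage) });
  return `${basePath}?${params.toString()}`;
}

async function fetchNextPage(basePath, page, perPage) {
  const resp = await fetch(buildNextPageUrl(basePath, page, perPage));
  if (!resp.ok) throw new Error(`Failed to load page ${page}`);
  return resp.json(); // assumed shape: { items: [...], has_more: boolean }
}
```

The scroll handler would call `fetchNextPage("/generate/images", currentPage + 1, perPage)`, append the returned items to the matching grid, bump `currentPage`, and set `hasMore` from `has_more` before clearing `isLoading`.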
@@ -0,0 +1,9 @@
{% extends "base.html" %} {% block title %}Generate — All You Can GET AI{%
endblock %} {% block content %}
<div class="card">
  <h1>Generate</h1>
  <p class="text-muted">
    Choose a generation type from the Generate menu above.
  </p>
</div>
{% endblock %}
@@ -0,0 +1,82 @@
{% extends "base.html" %}
{% block title %}Image Generation — All You Can GET AI{% endblock %}
{% block content %}
<div class="card">
  <h1>Image Generation</h1>
  <form method="post" enctype="multipart/form-data">
    <label for="model">Model</label>
    {% if models %}
    <select id="model" name="model" required>
      {% for m in models %}
      <option value="{{ m.id }}" {{ "selected" if request.form.get('model', '') == m.id else "" }}>{{ m.name }}</option>
      {% endfor %}
    </select>
    {% else %}
    <input id="model" name="model" type="text" required
           placeholder="e.g. google/gemini-2.5-flash-image"
           value="{{ request.form.get('model', '') }}">
    <p class="text-muted mt-1">No models available</p>
    {% endif %}

    <label for="prompt">Prompt</label>
    <textarea id="prompt" name="prompt" rows="4" required
              placeholder="Describe the image you want…">{{ request.form.get('prompt', '') }}</textarea>

    <label for="aspect_ratio">Aspect ratio</label>
    <select id="aspect_ratio" name="aspect_ratio">
      <option value="">Auto (default 1:1)</option>
      <option value="1:1" {{ "selected" if request.form.get('aspect_ratio')=='1:1' else "" }}>1:1 (square)</option>
      <option value="16:9" {{ "selected" if request.form.get('aspect_ratio')=='16:9' else "" }}>16:9 (landscape)</option>
      <option value="9:16" {{ "selected" if request.form.get('aspect_ratio')=='9:16' else "" }}>9:16 (portrait)</option>
      <option value="4:3" {{ "selected" if request.form.get('aspect_ratio')=='4:3' else "" }}>4:3</option>
      <option value="3:4" {{ "selected" if request.form.get('aspect_ratio')=='3:4' else "" }}>3:4</option>
      <option value="3:2" {{ "selected" if request.form.get('aspect_ratio')=='3:2' else "" }}>3:2</option>
      <option value="2:3" {{ "selected" if request.form.get('aspect_ratio')=='2:3' else "" }}>2:3</option>
    </select>

    <label for="image_size">Resolution</label>
    <select id="image_size" name="image_size">
      <option value="">Auto (default 1K)</option>
      <option value="0.5K" {{ "selected" if request.form.get('image_size')=='0.5K' else "" }}>0.5K (low)</option>
      <option value="1K" {{ "selected" if request.form.get('image_size')=='1K' else "" }}>1K (standard)</option>
      <option value="2K" {{ "selected" if request.form.get('image_size')=='2K' else "" }}>2K (high)</option>
      <option value="4K" {{ "selected" if request.form.get('image_size')=='4K' else "" }}>4K (ultra)</option>
    </select>

    <label for="reference_image">Reference image (optional)</label>
    <input
      id="reference_image"
      name="reference_image"
      type="file"
      accept="image/png,image/jpeg,image/webp,image/gif"
    >
    <p class="text-muted mt-1" id="reference-image-help">
      Upload an image to use as visual reference (image-to-image).
    </p>
    <div class="image-upload-preview" id="image-upload-preview" hidden>
      <p class="text-muted" id="image-upload-filename"></p>
      <img id="image-upload-preview-img" alt="Uploaded reference image preview" class="generated-image">
    </div>

    <button type="submit">Generate image</button>
  </form>

  {% if error %}
  <div class="alert alert-error mt-2">{{ error }}</div>
  {% endif %}

  {% if result %}
  <div class="result">
    <h2>Generated image{{ 's' if result.images|length > 1 }}</h2>
    {% for img in result.images %}
    {% if img.url %}
    <img src="{{ img.url }}" alt="Generated image" class="generated-image">
    {% endif %}
    {% if img.revised_prompt %}
    <p class="text-muted mt-1" style="font-size:0.8rem;">{{ img.revised_prompt }}</p>
    {% endif %}
    {% endfor %}
  </div>
  {% endif %}
</div>
{% endblock %}
@@ -0,0 +1,93 @@
{% extends "base.html" %} {% block title %}Text Generation — All You Can GET
AI{% endblock %} {% block content %}
<div class="card chat-page">
  <div class="chat-header">
    <h1>Text Chat</h1>
    <form method="post" style="display: inline">
      <input type="hidden" name="action" value="clear" />
      <button type="submit" class="btn-secondary btn-sm">New Chat</button>
    </form>
  </div>

  <!-- Config row -->
  <details class="chat-config" {% if not chat_history %}open{% endif %}>
    <summary>Model & System Prompt</summary>
    <div class="chat-config-body">
      <label for="cfg-model">Model</label>
      {% if models %}
      <select id="cfg-model" form="chat-form" name="model" required>
        {% for m in models %}
        <option value="{{ m.id }}" {{ "selected" if current_model == m.id else "" }}>{{ m.name }}</option>
        {% endfor %}
      </select>
      {% else %}
      <input
        id="cfg-model"
        form="chat-form"
        name="model"
        type="text"
        required
        placeholder="e.g. openai/gpt-4o"
        value="{{ current_model }}"
      />
      <p class="text-muted mt-1">No models available</p>
      {% endif %}

      <label for="cfg-sys">System prompt (optional)</label>
      <textarea
        id="cfg-sys"
        form="chat-form"
        name="system_prompt"
        rows="2"
        placeholder="Set behavior/instructions for assistant…"
      >
{{ system_prompt }}</textarea
      >
    </div>
  </details>

  <!-- Chat history -->
  <div class="chat-history" id="chat-history">
    {% if not chat_history %}
    <p class="chat-empty">No messages yet. Start the conversation below.</p>
    {% endif %} {% for msg in chat_history %} {% if msg.role == "user" %}
    <div class="chat-bubble chat-bubble--user">
      <span class="bubble-role">You</span>
      <div class="bubble-content">{{ msg.content }}</div>
    </div>
    {% elif msg.role == "assistant" %}
    <div class="chat-bubble chat-bubble--assistant">
      <span class="bubble-role">Assistant</span>
      <div class="bubble-content">{{ msg.content }}</div>
      {% if msg.usage %}
      <span class="bubble-meta"
        >{{ msg.usage.get('total_tokens', '') }} tokens</span
      >
      {% endif %}
    </div>
    {% endif %} {% endfor %} {% if error %}
    <div class="alert alert-error">{{ error }}</div>
    {% endif %}
  </div>

  <!-- Input -->
  <form id="chat-form" method="post" class="chat-input-row">
    <input type="hidden" name="action" value="send" />
    <textarea
      name="prompt"
      id="prompt"
      rows="2"
      required
      placeholder="Type a message…"
      class="chat-input-textarea"
    ></textarea>
    <button type="submit" class="btn-primary">Send</button>
  </form>
</div>

<script>
  // Auto-scroll chat to bottom
  const hist = document.getElementById("chat-history");
  if (hist) hist.scrollTop = hist.scrollHeight;
</script>
{% endblock %}
@@ -0,0 +1,192 @@
{% extends "base.html" %} {% block title %}Video Generation — All You Can GET
AI{% endblock %} {% block content %}
<div class="card">
  <h1>Video Generation</h1>

  <div class="tabs-container">
    <div class="tabs">
      <button class="tab-btn active" data-tab="text-to-video" type="button">
        Text to video
      </button>
      <button class="tab-btn" data-tab="image-to-video" type="button">
        Image to video
      </button>
    </div>

    <!-- Text-to-video -->
    <div class="tab-panel active" id="tab-text-to-video">
      <form method="post">
        <input type="hidden" name="mode" value="text" />

        <label for="model-t">Model</label>
        {% if models %}
        <select id="model-t" name="model" required>
          {% for m in models %}
          <option value="{{ m.id }}" {% if request.form.get('model', '') == m.id and request.form.get('mode','text')=='text' %}selected{% endif %}>{{ m.name }}</option>
          {% endfor %}
        </select>
        {% else %}
        <input
          id="model-t"
          name="model"
          type="text"
          required
          placeholder="e.g. openai/sora-2-pro"
          value="{{ request.form.get('model', '') if request.form.get('mode','text')=='text' else '' }}"
        />
        <p class="text-muted mt-1">No models available</p>
        {% endif %}

        <label for="prompt-t">Prompt</label>
        <textarea
          id="prompt-t"
          name="prompt"
          rows="4"
          required
          placeholder="Describe the video you want…"
        >
{{ request.form.get('prompt', '') if request.form.get('mode','text')=='text' else '' }}</textarea
        >

        <label for="aspect-t">Aspect ratio</label>
        <select id="aspect-t" name="aspect_ratio">
          <option value="16:9">16:9 (landscape)</option>
          <option value="9:16">9:16 (portrait)</option>
          <option value="1:1">1:1 (square)</option>
        </select>

        <label for="res-t">Resolution</label>
        <select id="res-t" name="resolution">
          <option value="">Auto (default)</option>
          <option value="480p">480p</option>
          <option value="720p">720p</option>
          <option value="1080p">1080p</option>
        </select>

        <label for="duration-t">Duration (seconds)</label>
        <select id="duration-t" name="duration_seconds">
          <option value="4">4s</option>
          <option value="8">8s</option>
          <option value="12" selected>12s</option>
          <option value="16">16s</option>
          <option value="20">20s</option>
        </select>

        <button type="submit">Generate video</button>
      </form>
    </div>

    <!-- Image-to-video -->
    <div class="tab-panel" id="tab-image-to-video">
      <form method="post">
        <input type="hidden" name="mode" value="image" />

        <label for="model-i">Model</label>
        {% if models %}
        <select id="model-i" name="model" required>
          {% for m in models %}
          <option value="{{ m.id }}" {% if request.form.get('model', '') == m.id and request.form.get('mode')=='image' %}selected{% endif %}>{{ m.name }}</option>
          {% endfor %}
        </select>
        {% else %}
        <input
          id="model-i"
          name="model"
          type="text"
          required
          placeholder="e.g. openai/sora-2-pro"
          value="{{ request.form.get('model', '') if request.form.get('mode')=='image' else '' }}"
        />
        <p class="text-muted mt-1">No models available</p>
        {% endif %}

        <label for="image_url">Source image URL</label>
        <input
          id="image_url"
          name="image_url"
          type="url"
          required
          placeholder="https://example.com/photo.jpg"
          value="{{ request.form.get('image_url', '') }}"
        />

        <label for="prompt-i">Motion prompt</label>
        <textarea
          id="prompt-i"
          name="prompt"
          rows="3"
          required
          placeholder="Describe the motion or transformation…"
        >
{{ request.form.get('prompt', '') if request.form.get('mode')=='image' else '' }}</textarea
        >

        <label for="aspect-i">Aspect ratio</label>
        <select id="aspect-i" name="aspect_ratio">
          <option value="16:9">16:9 (landscape)</option>
          <option value="9:16">9:16 (portrait)</option>
          <option value="1:1">1:1 (square)</option>
        </select>

        <label for="res-i">Resolution</label>
        <select id="res-i" name="resolution">
          <option value="">Auto (default)</option>
          <option value="480p">480p</option>
          <option value="720p">720p</option>
          <option value="1080p">1080p</option>
        </select>

        <label for="duration-i">Duration (seconds)</label>
        <select id="duration-i" name="duration_seconds">
          <option value="4">4s</option>
          <option value="8">8s</option>
          <option value="12" selected>12s</option>
          <option value="16">16s</option>
          <option value="20">20s</option>
        </select>

        <button type="submit">Generate video from image</button>
      </form>
    </div>
  </div>

  {% if error %}
  <div class="alert alert-error mt-2">{{ error }}</div>
  {% endif %} {% if result %}
  <div class="result">
    <h2>Video job</h2>
    <p>Job ID: <code>{{ result.db_id or result.id }}</code></p>
    {% if result.status in ('queued', 'processing') and result.db_id %}
    <div id="video-poll-status" data-video-id="{{ result.db_id }}">
      <p>
        <span id="poll-status-text"
          >Status: <strong>{{ result.status }}</strong></span
        >
        — checking for updates every 5 s…
      </p>
      <div id="poll-video-container"></div>
      <button
        id="cancel-video-btn"
        class="mt-2 px-4 py-2 bg-red-600 hover:bg-red-700 text-white rounded-md text-sm"
      >
        Cancel Job
      </button>
      <p id="cancel-msg" class="text-sm mt-2 hidden"></p>
    </div>
    {% elif result.video_url %}
    <video
      src="{{ result.video_url }}"
      controls
      class="generated-video"
    ></video>
    {% elif result.status == 'failed' %}
    <div class="alert alert-error">
      Generation failed: {{ result.error or 'Unknown error' }}
    </div>
    {% else %}
    <p>Status: <strong>{{ result.status }}</strong></p>
    {% endif %}
  </div>
  {% endif %}
</div>
{% endblock %}
@@ -0,0 +1,35 @@
{% extends "base.html" %} {% block title %}Generated Image{% endblock %} {%
block content %}
<div class="container mx-auto px-4 py-8">
  <a
    href="{{ url_for('gallery') }}"
    class="text-blue-400 hover:underline mb-4 inline-block"
    >← Back to Gallery</a
  >

  {% if image %}
  <h1 class="text-2xl font-bold mb-4">Generated Image</h1>
  <div class="bg-gray-800 rounded-lg shadow-lg overflow-hidden">
    <img
      src="{{ image.image_data }}"
      alt="{{ image.prompt }}"
      class="w-full object-contain"
    />
    <div class="p-6">
      <h2 class="text-xl font-semibold mb-2">Prompt</h2>
      <p class="text-gray-300 bg-gray-900 p-3 rounded-md">{{ image.prompt }}</p>
      <div class="mt-4 text-sm text-gray-400">
        <p><strong>Model:</strong> {{ image.model_id }}</p>
        <p>
          <strong>Created:</strong> {{ image.created_at | fromisoformat |
          humantime }}
        </p>
      </div>
    </div>
  </div>
  {% else %}
  <h1 class="text-2xl font-bold">Image not found</h1>
  <p class="text-gray-400 mt-2">Could not find details for this image.</p>
  {% endif %}
</div>
{% endblock %}
@@ -0,0 +1,16 @@
{% extends "base.html" %} {% block title %}Log in — All You Can GET AI{%
endblock %} {% block content %}
<div class="card">
  <h1>Log in</h1>
  <form method="post">
    <label for="email">Email</label>
    <input id="email" name="email" type="email" required autofocus />

    <label for="password">Password</label>
    <input id="password" name="password" type="password" required />

    <button type="submit">Log in</button>
  </form>
  <p>No account? <a href="{{ url_for('register') }}">Register</a></p>
</div>
{% endblock %}
@@ -0,0 +1,43 @@
{% extends "base.html" %} {% block title %}Profile — All You Can GET AI{%
endblock %} {% block content %}
<div class="card">
  <h1>Your Profile</h1>

  <h2 class="section-title" style="margin-top: 0">Account details</h2>
  <p class="text-muted" style="font-size: 0.875rem; margin-bottom: 1.5rem">
    Current email:
    <strong style="color: var(--text)">{{ user.get('email', '') }}</strong>
    · Role:
    <span class="role-badge role-{{ user.get('role','user') }}"
      >{{ user.get('role', 'user') }}</span
    >
  </p>

  <h2 class="section-title">Update email</h2>
  <form method="post">
    <label for="email">New email</label>
    <input
      id="email"
      name="email"
      type="email"
      placeholder="{{ user.get('email', '') }}"
    />
    <input type="hidden" name="password" value="" />
    <button type="submit">Save email</button>
  </form>

  <h2 class="section-title" style="margin-top: 2rem">Change password</h2>
  <form method="post">
    <label for="password">New password</label>
    <input
      id="password"
      name="password"
      type="password"
      placeholder="Enter new password"
      minlength="8"
    />
    <input type="hidden" name="email" value="" />
    <button type="submit">Save password</button>
  </form>
</div>
{% endblock %}
@@ -0,0 +1,22 @@
{% extends "base.html" %} {% block title %}Register — All You Can GET AI{%
endblock %} {% block content %}
<div class="card">
  <h1>Create account</h1>
  <form method="post">
    <label for="email">Email</label>
    <input id="email" name="email" type="email" required autofocus />

    <label for="password">Password</label>
    <input
      id="password"
      name="password"
      type="password"
      required
      minlength="8"
    />

    <button type="submit">Register</button>
  </form>
  <p>Already have an account? <a href="{{ url_for('login') }}">Log in</a></p>
</div>
{% endblock %}
@@ -0,0 +1,40 @@
{% extends "base.html" %} {% block title %}Uploaded Image{% endblock %} {% block
content %}
<div class="container mx-auto px-4 py-8">
  <a
    href="{{ url_for('gallery') }}"
    class="text-blue-400 hover:underline mb-4 inline-block"
    >← Back to Gallery</a
  >

  {% if image %}
  <h1 class="text-2xl font-bold mb-4">Uploaded Image</h1>
  <div class="bg-gray-800 rounded-lg shadow-lg overflow-hidden">
    <img
      src="{{ url_for('serve_uploaded_image', image_id=image.id) }}"
      alt="{{ image.filename }}"
      class="w-full object-contain"
    />
    <div class="p-6">
      <h2 class="text-xl font-semibold mb-2">Details</h2>
      <div class="mt-4 text-sm text-gray-400">
        <p><strong>Filename:</strong> {{ image.filename }}</p>
        <p><strong>Content Type:</strong> {{ image.content_type }}</p>
        <p>
          <strong>Size:</strong> {{ (image.size_bytes / 1024) | round(2) }} KB
        </p>
        <p>
          <strong>Uploaded:</strong> {{ image.created_at | fromisoformat |
          humantime }}
        </p>
      </div>
    </div>
  </div>
  {% else %}
  <h1 class="text-2xl font-bold">Image not found</h1>
  <p class="text-gray-400 mt-2">
    Could not find details for this uploaded image.
  </p>
  {% endif %}
</div>
{% endblock %}
@@ -0,0 +1,77 @@
{% extends "base.html" %} {% block title %}Generated Video{% endblock %} {% block content %}
<div class="container mx-auto px-4 py-8">
  <a
    href="{{ url_for('gallery') }}"
    class="text-blue-400 hover:underline mb-4 inline-block"
    >← Back to Gallery</a
  >

  {% if video %}
  <h1 class="text-2xl font-bold mb-4">Video Generation Job</h1>
  <div class="bg-gray-800 rounded-lg shadow-lg overflow-hidden">
    {% if video.status == 'completed' and video.video_url %}
    <video src="{{ video.video_url }}" controls class="w-full"></video>
    {% elif video.status in ('queued', 'processing') %}
    <div
      class="w-full bg-black aspect-video flex flex-col items-center justify-center p-6 text-center"
      id="video-poll-status"
      data-video-id="{{ video.id }}"
    >
      <p class="text-xl font-semibold">
        Status: <strong id="poll-status-text">{{ video.status }}</strong>
      </p>
      <p class="text-gray-400 mt-2">
        Your video is being processed. This page will update automatically when
        it's ready.
      </p>
      <div class="spinner mt-4"></div>
      <button
        id="cancel-video-btn"
        class="mt-4 px-4 py-2 bg-red-600 hover:bg-red-700 text-white rounded-md text-sm"
      >
        Cancel Job
      </button>
      <p id="cancel-msg" class="text-sm mt-2 hidden"></p>
    </div>
    {% elif video.status == 'failed' %}
    <div
      class="w-full bg-black aspect-video flex flex-col items-center justify-center p-6 text-center"
    >
      <p class="text-xl font-semibold text-red-500">Generation Failed</p>
      <p class="text-gray-400 mt-2">
        {{ video.error or 'An unknown error occurred.' }}
      </p>
    </div>
    {% else %}
    <div
      class="w-full bg-black aspect-video flex flex-col items-center justify-center p-6 text-center"
    >
      <p class="text-xl font-semibold">Video Not Available</p>
      <p class="text-gray-400 mt-2">Status: {{ video.status }}</p>
    </div>
    {% endif %}

    <div class="p-6">
      <h2 class="text-xl font-semibold mb-2">Prompt</h2>
      <p class="text-gray-300 bg-gray-900 p-3 rounded-md">{{ video.prompt }}</p>
      <div class="mt-4 text-sm text-gray-400">
        <p><strong>Model:</strong> {{ video.model_id }}</p>
        <p><strong>Job ID:</strong> <code>{{ video.job_id }}</code></p>
        <p>
          <strong>Created:</strong> {{ video.created_at | fromisoformat | humantime }}
        </p>
        <p>
          <strong>Last Update:</strong> {{ video.updated_at | fromisoformat | humantime }}
        </p>
      </div>
    </div>
  </div>
  {% else %}
  <h1 class="text-2xl font-bold">Video job not found</h1>
  <p class="text-gray-400 mt-2">Could not find details for this video job.</p>
  {% endif %}
</div>
{% endblock %}
@@ -0,0 +1,2 @@
pytest
pytest-mock
@@ -0,0 +1,7 @@
Flask
gunicorn
httpx
itsdangerous
Jinja2
MarkupSafe
Werkzeug
@@ -0,0 +1,622 @@
"""Frontend integration tests — backend API calls are fully mocked."""
import os
import pytest
from unittest.mock import MagicMock, patch

os.environ.setdefault("FLASK_SECRET_KEY", "test-secret")
os.environ.setdefault("BACKEND_URL", "http://backend-mock")

from app.main import app  # noqa: E402


@pytest.fixture
def client():
    app.config["TESTING"] = True
    app.config["WTF_CSRF_ENABLED"] = False
    with app.test_client() as c:
        yield c


def _mock_response(status_code: int, json_data) -> MagicMock:
    m = MagicMock()
    m.status_code = status_code
    m.json.return_value = json_data
    return m


def _set_auth(client, role: str = "user"):
    with client.session_transaction() as sess:
        sess["access_token"] = "tok"
        sess["refresh_token"] = "ref"
        sess["user_role"] = role
        sess["user_email"] = "u@example.com"
# ---------------------------------------------------------------------------
# Index redirect
# ---------------------------------------------------------------------------


def test_index_redirects_to_login(client):
    resp = client.get("/")
    assert resp.status_code == 302
    assert "/login" in resp.headers["Location"]


def test_index_redirects_to_dashboard_when_logged_in(client):
    _set_auth(client)
    resp = client.get("/")
    assert resp.status_code == 302
    assert "/dashboard" in resp.headers["Location"]


# ---------------------------------------------------------------------------
# Login
# ---------------------------------------------------------------------------


def test_login_page_renders(client):
    resp = client.get("/login")
    assert resp.status_code == 200
    assert b"Log in" in resp.data


def test_login_success(client):
    login_mock = _mock_response(
        200, {"access_token": "acc", "refresh_token": "ref"})
    me_mock = _mock_response(
        200, {"id": "1", "email": "u@example.com", "role": "user"})
    with patch("frontend.app.main.httpx.request", side_effect=[login_mock, me_mock]):
        resp = client.post(
            "/login", data={"email": "u@example.com", "password": "secret"})
    assert resp.status_code == 302
    assert "/dashboard" in resp.headers["Location"]


def test_login_stores_role_in_session(client):
    login_mock = _mock_response(
        200, {"access_token": "acc", "refresh_token": "ref"})
    me_mock = _mock_response(
        200, {"id": "1", "email": "admin@example.com", "role": "admin"})
    with patch("frontend.app.main.httpx.request", side_effect=[login_mock, me_mock]):
        client.post(
            "/login", data={"email": "admin@example.com", "password": "secret"})
    with client.session_transaction() as sess:
        assert sess["user_role"] == "admin"


def test_login_failure_shows_error(client):
    mock = _mock_response(401, {"detail": "Invalid credentials."})
    with patch("frontend.app.main.httpx.request", return_value=mock):
        resp = client.post(
            "/login", data={"email": "u@example.com", "password": "wrong"})
    assert resp.status_code == 200
    assert b"Invalid email or password" in resp.data
# ---------------------------------------------------------------------------
# Register
# ---------------------------------------------------------------------------


def test_register_page_renders(client):
    resp = client.get("/register")
    assert resp.status_code == 200
    assert b"Create account" in resp.data


def test_register_success_redirects_to_login(client):
    mock = _mock_response(
        201, {"id": "abc", "email": "u@example.com", "role": "user"})
    with patch("frontend.app.main.httpx.request", return_value=mock):
        resp = client.post(
            "/register", data={"email": "u@example.com", "password": "secret123"})
    assert resp.status_code == 302
    assert "/login" in resp.headers["Location"]


def test_register_duplicate_shows_error(client):
    mock = _mock_response(409, {"detail": "Email already registered."})
    with patch("frontend.app.main.httpx.request", return_value=mock):
        resp = client.post(
            "/register", data={"email": "dup@example.com", "password": "secret123"})
    assert resp.status_code == 200
    assert b"Email already registered" in resp.data


# ---------------------------------------------------------------------------
# Logout
# ---------------------------------------------------------------------------


def test_logout_clears_session_and_redirects(client):
    _set_auth(client)
    mock = _mock_response(204, {})
    with patch("frontend.app.main.httpx.request", return_value=mock):
        resp = client.get("/logout")
    assert resp.status_code == 302
    assert "/login" in resp.headers["Location"]
    with client.session_transaction() as sess:
        assert "access_token" not in sess


# ---------------------------------------------------------------------------
# Dashboard
# ---------------------------------------------------------------------------


def test_dashboard_requires_login(client):
    resp = client.get("/dashboard")
    assert resp.status_code == 302
    assert "/login" in resp.headers["Location"]


def test_dashboard_renders_user_info(client):
    _set_auth(client)
    me_mock = _mock_response(
        200, {"id": "1", "email": "u@example.com", "role": "user"})
    images_mock = _mock_response(200, [])
    gen_images_mock = _mock_response(200, [])
    gen_videos_mock = _mock_response(200, [])
    with patch("frontend.app.main.httpx.request", side_effect=[me_mock, images_mock, gen_images_mock, gen_videos_mock]):
        resp = client.get("/dashboard")
    assert resp.status_code == 200
    assert b"u@example.com" in resp.data
# ---------------------------------------------------------------------------
# Generate — redirect + separate pages
# ---------------------------------------------------------------------------


def test_generate_redirects_to_text(client):
    _set_auth(client)
    resp = client.get("/generate")
    assert resp.status_code == 302
    assert "/generate/text" in resp.headers["Location"]


def test_generate_text_page_renders(client):
    _set_auth(client)
    resp = client.get("/generate/text")
    assert resp.status_code == 200
    assert b"Text Chat" in resp.data


def test_generate_text_requires_login(client):
    resp = client.get("/generate/text")
    assert resp.status_code == 302
    assert "/login" in resp.headers["Location"]


def test_generate_text_success(client):
    _set_auth(client)
    gen_mock = _mock_response(
        200, {"id": "g1", "model": "openai/gpt-4o", "content": "Hello world", "usage": None})
    models_mock = _mock_response(200, [
        {"id": "openai/gpt-4o", "name": "GPT-4o", "modality": "text"}
    ])
    with patch("frontend.app.main.httpx.request", side_effect=[gen_mock, models_mock]):
        resp = client.post(
            "/generate/text",
            data={"model": "openai/gpt-4o", "prompt": "Say hello", "action": "send"})
    assert resp.status_code == 200
    assert b"Hello world" in resp.data
    assert b"chat-bubble--assistant" in resp.data


def test_generate_text_page_shows_optional_system_prompt(client):
    _set_auth(client)
    models_mock = _mock_response(200, [])
    with patch("frontend.app.main.httpx.request", return_value=models_mock):
        resp = client.get("/generate/text")
    assert resp.status_code == 200
    assert b"System prompt (optional)" in resp.data
    assert b'name="system_prompt"' in resp.data


def test_generate_text_forwards_system_prompt(client):
    _set_auth(client)
    gen_mock = _mock_response(
        200, {"id": "g1", "model": "openai/gpt-4o", "content": "Hello world", "usage": None})
    models_mock = _mock_response(200, [
        {"id": "openai/gpt-4o", "name": "GPT-4o", "modality": "text"}
    ])

    with patch("frontend.app.main.httpx.request", side_effect=[gen_mock, models_mock]) as mock_request:
        resp = client.post(
            "/generate/text",
            data={
                "model": "openai/gpt-4o",
                "prompt": "Say hello",
                "system_prompt": "You are concise.",
                "action": "send",
            },
        )

    assert resp.status_code == 200
    first_call_kwargs = mock_request.call_args_list[0].kwargs
    assert first_call_kwargs["json"]["system_prompt"] == "You are concise."
    # Messages array sent (not bare prompt)
    assert "messages" in first_call_kwargs["json"]


def test_generate_text_chat_history_accumulates(client):
    """Second message includes prior user+assistant turns in messages array."""
    _set_auth(client)

    turn1_gen = _mock_response(
        200, {"id": "g1", "model": "openai/gpt-4o", "content": "Turn 1 reply", "usage": None})
    turn1_models = _mock_response(
        200, [{"id": "openai/gpt-4o", "name": "GPT-4o", "modality": "text"}])
    turn2_gen = _mock_response(
        200, {"id": "g2", "model": "openai/gpt-4o", "content": "Turn 2 reply", "usage": None})
    turn2_models = _mock_response(
        200, [{"id": "openai/gpt-4o", "name": "GPT-4o", "modality": "text"}])

    with patch("frontend.app.main.httpx.request", side_effect=[turn1_gen, turn1_models]):
        client.post(
            "/generate/text", data={"model": "openai/gpt-4o", "prompt": "First", "action": "send"})

    with patch("frontend.app.main.httpx.request", side_effect=[turn2_gen, turn2_models]) as mock_req:
        resp = client.post(
            "/generate/text", data={"model": "openai/gpt-4o", "prompt": "Second", "action": "send"})

    assert resp.status_code == 200
    assert b"Turn 1 reply" in resp.data
    assert b"Turn 2 reply" in resp.data
    # Backend received 3 messages: First(user), Turn1(assistant), Second(user)
    sent_messages = mock_req.call_args_list[0].kwargs["json"]["messages"]
    assert len(sent_messages) == 3
    assert sent_messages[0]["role"] == "user" and sent_messages[0]["content"] == "First"
    assert sent_messages[1]["role"] == "assistant"
    assert sent_messages[2]["role"] == "user" and sent_messages[2]["content"] == "Second"


def test_generate_text_clear_resets_history(client):
    """Clear action removes session history and redirects."""
    _set_auth(client)

    gen_mock = _mock_response(
        200, {"id": "g1", "model": "openai/gpt-4o", "content": "Reply", "usage": None})
    models_mock = _mock_response(
        200, [{"id": "openai/gpt-4o", "name": "GPT-4o", "modality": "text"}])
    with patch("frontend.app.main.httpx.request", side_effect=[gen_mock, models_mock]):
        client.post(
            "/generate/text", data={"model": "openai/gpt-4o", "prompt": "Hi", "action": "send"})

    clear_resp = client.post("/generate/text", data={"action": "clear"})
    assert clear_resp.status_code == 302

    models_mock2 = _mock_response(
        200, [{"id": "openai/gpt-4o", "name": "GPT-4o", "modality": "text"}])
    with patch("frontend.app.main.httpx.request", return_value=models_mock2):
        get_resp = client.get("/generate/text")
    assert b"No messages yet" in get_resp.data


def test_generate_image_page_renders(client):
    _set_auth(client)
    resp = client.get("/generate/image")
    assert resp.status_code == 200
    assert b"Image Generation" in resp.data
    assert b"reference_image" in resp.data


def test_generate_image_success(client):
    _set_auth(client)
    mock = _mock_response(200, {
        "id": "g2", "model": "openai/dall-e-3",
        "images": [{"url": "https://example.com/img.png", "revised_prompt": None, "b64_json": None}]
    })
    with patch("frontend.app.main.httpx.request", return_value=mock):
        resp = client.post("/generate/image", data={
            "model": "openai/dall-e-3", "prompt": "A cat", "n": "1", "size": "1024x1024"
        })
    assert resp.status_code == 200
    assert b"example.com/img.png" in resp.data


def test_generate_video_page_renders(client):
    _set_auth(client)
    resp = client.get("/generate/video")
    assert resp.status_code == 200
    assert b"Video Generation" in resp.data


def test_generate_video_text_mode(client):
    _set_auth(client)
    mock = _mock_response(
        200, {"id": "v1", "model": "openai/sora-2-pro", "status": "queued"})
    with patch("frontend.app.main.httpx.request", return_value=mock):
        resp = client.post("/generate/video", data={
            "mode": "text", "model": "openai/sora-2-pro",
            "prompt": "A sunset", "aspect_ratio": "16:9",
            "duration_seconds": "10", "resolution": "720p",
        })
    assert resp.status_code == 200
    assert b"queued" in resp.data


def test_generate_video_image_mode(client):
    _set_auth(client)
    mock = _mock_response(
        200, {"id": "v2", "model": "openai/sora-2-pro", "status": "processing"})
    with patch("frontend.app.main.httpx.request", return_value=mock):
        resp = client.post("/generate/video", data={
            "mode": "image", "model": "openai/sora-2-pro",
            "image_url": "https://example.com/img.png",
            "prompt": "Pan right", "aspect_ratio": "16:9"
        })
    assert resp.status_code == 200
    assert b"processing" in resp.data


def test_generate_upstream_error_shows_message(client):
    _set_auth(client)
    gen_mock = _mock_response(502, {"detail": "OpenRouter error: timeout"})
    models_mock = _mock_response(200, [
        {"id": "openai/gpt-4o", "name": "GPT-4o", "modality": "text"}
    ])
    with patch("frontend.app.main.httpx.request", side_effect=[gen_mock, models_mock]):
        resp = client.post(
            "/generate/text", data={"model": "openai/gpt-4o", "prompt": "Hi"})
    assert resp.status_code == 200
    assert b"OpenRouter error" in resp.data
# ---------------------------------------------------------------------------
# Admin
# ---------------------------------------------------------------------------


def test_admin_requires_login(client):
    resp = client.get("/admin")
    assert resp.status_code == 302
    assert "/login" in resp.headers["Location"]


def test_admin_requires_admin_role(client):
    _set_auth(client, role="user")
    resp = client.get("/admin")
    assert resp.status_code == 302
    assert "/dashboard" in resp.headers["Location"]


def test_admin_page_renders(client):
    _set_auth(client, role="admin")
    stats_mock = _mock_response(
        200, {"total_users": 5, "active_refresh_tokens": 3, "admin_users": 1})
    users_mock = _mock_response(200, [
        {"id": "u1", "email": "a@example.com", "role": "admin"},
        {"id": "u2", "email": "b@example.com", "role": "user"},
    ])
    with patch("frontend.app.main.httpx.request", side_effect=[stats_mock, users_mock]):
        resp = client.get("/admin")
    assert resp.status_code == 200
    assert b"a@example.com" in resp.data
    assert b"b@example.com" in resp.data


def test_admin_set_role(client):
    _set_auth(client, role="admin")
    mock = _mock_response(
        200, {"id": "u2", "email": "b@example.com", "role": "admin"})
    with patch("frontend.app.main.httpx.request", return_value=mock):
        resp = client.post("/admin/users/u2/role", data={"role": "admin"})
    assert resp.status_code == 302
    assert "/admin" in resp.headers["Location"]


def test_admin_delete_user(client):
    _set_auth(client, role="admin")
    mock = _mock_response(204, {})
    with patch("frontend.app.main.httpx.request", return_value=mock):
        resp = client.post("/admin/users/u2/delete")
    assert resp.status_code == 302
    assert "/admin" in resp.headers["Location"]
# ---------------------------------------------------------------------------
# Profile
# ---------------------------------------------------------------------------


def test_profile_requires_login(client):
    resp = client.get("/users/profile")
    assert resp.status_code == 302
    assert "/login" in resp.headers["Location"]


def test_profile_page_renders(client):
    _set_auth(client)
    mock = _mock_response(
        200, {"id": "1", "email": "u@example.com", "role": "user"})
    with patch("frontend.app.main.httpx.request", return_value=mock):
        resp = client.get("/users/profile")
    assert resp.status_code == 200
    assert b"Profile" in resp.data
    assert b"u@example.com" in resp.data


def test_profile_update_email(client):
    _set_auth(client)
    mock = _mock_response(
        200, {"id": "1", "email": "new@example.com", "role": "user"})
    with patch("frontend.app.main.httpx.request", return_value=mock):
        resp = client.post(
            "/users/profile", data={"email": "new@example.com", "password": ""})
    assert resp.status_code == 302
    assert "/users/profile" in resp.headers["Location"]


def test_profile_update_failure(client):
    _set_auth(client)
    mock = _mock_response(422, {"detail": "Invalid email."})
    with patch("frontend.app.main.httpx.request", return_value=mock):
        resp = client.post(
            "/users/profile", data={"email": "bad", "password": ""})
    # redirects regardless, flash message shown on next GET
    assert resp.status_code == 302
# ---------------------------------------------------------------------------
# GET /generate/video/status (polling proxy)
# ---------------------------------------------------------------------------


def test_video_status_proxy_completed(client):
    _set_auth(client)
    mock = _mock_response(200, {
        "id": "v1", "model": "", "status": "completed",
        "video_url": "https://example.com/video.mp4",
        "unsigned_urls": ["https://example.com/video.mp4"],
    })
    with patch("frontend.app.main.httpx.request", return_value=mock):
        resp = client.get(
            "/generate/video/status",
            query_string={
                "polling_url": "https://openrouter.ai/api/v1/videos/v1"},
        )
    assert resp.status_code == 200
    data = resp.get_json()
    assert data["status"] == "completed"
    assert data["video_url"] == "https://example.com/video.mp4"


def test_video_status_proxy_processing(client):
    _set_auth(client)
    mock = _mock_response(
        200, {"id": "v1", "model": "", "status": "processing"})
    with patch("frontend.app.main.httpx.request", return_value=mock):
        resp = client.get(
            "/generate/video/status",
            query_string={
                "polling_url": "https://openrouter.ai/api/v1/videos/v1"},
        )
    assert resp.status_code == 200
    assert resp.get_json()["status"] == "processing"


def test_video_status_proxy_requires_login(client):
    resp = client.get(
        "/generate/video/status",
        query_string={"polling_url": "https://openrouter.ai/api/v1/videos/v1"},
    )
    assert resp.status_code == 302
    assert "/login" in resp.headers["Location"]


def test_video_status_proxy_missing_url(client):
    _set_auth(client)
    resp = client.get("/generate/video/status")
    assert resp.status_code == 400
    assert b"polling_url" in resp.data


def test_video_generate_renders_polling_ui(client):
    """When response has polling_url, template shows polling div."""
    _set_auth(client)
    mock = _mock_response(200, {
        "id": "v1", "model": "openai/sora-2-pro", "status": "queued",
        "polling_url": "https://openrouter.ai/api/v1/videos/v1",
    })
    with patch("frontend.app.main.httpx.request", return_value=mock):
        resp = client.post("/generate/video", data={
            "mode": "text", "model": "openai/sora-2-pro",
            "prompt": "A sunset", "aspect_ratio": "16:9",
        })
    assert resp.status_code == 200
    assert b"video-poll-status" in resp.data
# ---------------------------------------------------------------------------
# Image upload — frontend proxy + dashboard
# ---------------------------------------------------------------------------


def test_dashboard_shows_uploaded_images(client):
    _set_auth(client)
    me_mock = _mock_response(
        200, {"id": "1", "email": "u@example.com", "role": "user"})
    images_mock = _mock_response(200, [
        {"id": "img-1", "filename": "cat.png", "content_type": "image/png",
         "size_bytes": 1024, "created_at": "2026-04-29T10:00:00"},
    ])
    gen_images_mock = _mock_response(200, [])
    gen_videos_mock = _mock_response(200, [])
    with patch("frontend.app.main.httpx.request", side_effect=[me_mock, images_mock, gen_images_mock, gen_videos_mock]):
        resp = client.get("/dashboard")
    assert resp.status_code == 200
    assert b"cat.png" in resp.data
    assert b"img-1" in resp.data


def test_dashboard_shows_generated_images(client):
    _set_auth(client)
    me_mock = _mock_response(
        200, {"id": "1", "email": "u@example.com", "role": "user"})
    images_mock = _mock_response(200, [])
    gen_images_mock = _mock_response(200, [
        {
            "id": "gen-1",
            "model_id": "google/gemini-2.5-flash-image",
            "prompt": "A cat on the moon",
            "image_data": "data:image/png;base64,abc123",
            "created_at": "2026-04-29T10:00:00",
        }
    ])
    gen_videos_mock = _mock_response(200, [])
    with patch("frontend.app.main.httpx.request", side_effect=[me_mock, images_mock, gen_images_mock, gen_videos_mock]):
        resp = client.get("/dashboard")
    assert resp.status_code == 200
    assert b"Generated images" in resp.data
    assert b"A cat on the moon" in resp.data
    assert b"data:image/png;base64,abc123" in resp.data


def test_dashboard_no_images_section_when_empty(client):
    _set_auth(client)
    me_mock = _mock_response(
        200, {"id": "1", "email": "u@example.com", "role": "user"})
    images_mock = _mock_response(200, [])
    gen_images_mock = _mock_response(200, [])
    gen_videos_mock = _mock_response(200, [])
    with patch("frontend.app.main.httpx.request", side_effect=[me_mock, images_mock, gen_images_mock, gen_videos_mock]):
        resp = client.get("/dashboard")
    assert resp.status_code == 200
    assert b"Uploaded reference images" not in resp.data


def test_serve_uploaded_image_proxy(client):
    _set_auth(client)
    img_bytes = b"\x89PNG\r\n\x1a\n"
    mock = MagicMock()
    mock.status_code = 200
    mock.content = img_bytes
    mock.headers = {"content-type": "image/png"}
    with patch("frontend.app.main.httpx.request", return_value=mock):
        resp = client.get("/images/img-1/file")
    assert resp.status_code == 200
    assert resp.content_type == "image/png"
    assert resp.data == img_bytes


def test_serve_uploaded_image_requires_login(client):
    resp = client.get("/images/img-1/file")
    assert resp.status_code == 302
    assert "/login" in resp.headers["Location"]


def test_serve_uploaded_image_not_found_proxied(client):
    _set_auth(client)
    mock = _mock_response(404, {"detail": "Image not found."})
    mock.content = b""
    with patch("frontend.app.main.httpx.request", return_value=mock):
        resp = client.get("/images/bad-id/file")
    assert resp.status_code == 404


def test_generate_image_uploads_reference_then_generates(client):
    _set_auth(client)
    gen_mock = _mock_response(200, {
        "id": "g2", "model": "openai/dall-e-3",
        "images": [{"url": "https://example.com/out.png", "revised_prompt": None, "b64_json": None}]
    })
    # No file field → upload branch skipped; only generate call is made
    with patch("frontend.app.main.httpx.request", return_value=gen_mock):
        resp = client.post("/generate/image", data={
            "model": "openai/dall-e-3", "prompt": "A cat", "n": "1", "size": "1024x1024",
        }, content_type="multipart/form-data")
    assert resp.status_code == 200
    assert b"example.com/out.png" in resp.data
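Throughout the test suite above, sequential backend calls are faked by handing `unittest.mock` a `side_effect` list, so each call to the patched `httpx.request` consumes the next canned response in order. A minimal stdlib-only sketch of that pattern in isolation (the URLs here are illustrative placeholders, not the app's real routes):

```python
from unittest.mock import MagicMock


def _mock_response(status_code, json_data):
    # Same shape as the helper in the test file: .status_code plus .json()
    m = MagicMock()
    m.status_code = status_code
    m.json.return_value = json_data
    return m


# A side_effect list makes the mock return one canned response per call, in order
request = MagicMock(side_effect=[
    _mock_response(200, {"access_token": "acc", "refresh_token": "ref"}),
    _mock_response(200, {"id": "1", "email": "u@example.com", "role": "user"}),
])

login = request("POST", "http://backend-mock/auth/login")  # first canned response
me = request("GET", "http://backend-mock/users/me")        # second canned response

assert login.json()["access_token"] == "acc"
assert me.json()["role"] == "user"
assert request.call_count == 2
```

In the tests themselves the same list is installed over the frontend's HTTP client via `patch("frontend.app.main.httpx.request", side_effect=[...])`, which additionally records `call_args_list` so the forwarded JSON payloads can be asserted on.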
@@ -0,0 +1,73 @@
# Nginx reverse proxy configuration for Coolify deployment
# Place this in /etc/nginx/conf.d/ai.allucanget.biz.conf or use Coolify's built-in proxy

# Backend API proxy
upstream backend {
    server 127.0.0.1:12015;
}

# Frontend proxy
upstream frontend {
    server 127.0.0.1:12016;
}

server {
    listen 80;
    server_name ai.allucanget.biz www.ai.allucanget.biz;

    # Redirect HTTP to HTTPS
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name ai.allucanget.biz www.ai.allucanget.biz;

    # SSL configuration (managed by Let's Encrypt / Certbot)
    ssl_certificate /etc/letsencrypt/live/ai.allucanget.biz/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/ai.allucanget.biz/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;

    # Backend API proxy
    location /api/ {
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket support (if needed in future)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    # Frontend proxy
    location / {
        proxy_pass http://frontend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Static files caching
        location ~* \.(css|js|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot)$ {
            proxy_pass http://frontend;
            expires 30d;
            add_header Cache-Control "public, immutable";
        }
    }

    # Health check endpoint
    location /health {
        proxy_pass http://backend;
        proxy_set_header Host $host;
    }
}
@@ -0,0 +1,3 @@
[tool.pytest.ini_options]
asyncio_mode = "auto"
testpaths = ["backend/tests", "frontend/tests"]
@@ -0,0 +1,41 @@
anyio
bcrypt==4.0.1
blinker
certifi
cffi
cryptography
dnspython
duckdb
ecdsa
email-validator
exceptiongroup
fastapi
Flask
h11
httpcore
httpx
idna
iniconfig
itsdangerous
Jinja2
MarkupSafe
packaging
passlib==1.7.4
pluggy
pyasn1
pycparser
pydantic
pydantic_core
Pygments
pytest
pytest-asyncio
python-dotenv
python-jose
rsa
six
starlette
tomli
typing-inspection
typing_extensions
uvicorn
Werkzeug