# Compare commits

`e5b413c79d..main` (45 commits)
| SHA1 |
|---|
| 02fc5995db |
| 299ad7d943 |
| 3d0a08a8ef |
| 2ca7ae538f |
| 37edef716a |
| d5a94947de |
| 615b842b03 |
| 998cc2e472 |
| 81c06ad13b |
| d1c2b6da68 |
| 0ae0e6e7fa |
| bdb7c7c43a |
| bd77d4c43e |
| cc96d26b08 |
| 8e36f48527 |
| df85676fa2 |
| 951a653dc9 |
| d4421616e5 |
| df24a07cd8 |
| 6ef8816736 |
| 472fe1cab8 |
| 712c556032 |
| 3d32e6df74 |
| 78b76dc331 |
| acc6991341 |
| 96d13fc440 |
| fe32c32726 |
| da3ae822f6 |
| e1d74fe163 |
| 20de65ad01 |
| 3224d16197 |
| 8871f136d4 |
| 52e95e3fe0 |
| abd61798c7 |
| d484721a94 |
| 87efca00df |
| fe41a8cbee |
| 5371bdce3b |
| 80999b3659 |
| 8f4d01d34d |
| 2c6fdc03a8 |
| ae7727c01a |
| 508f0e5d40 |
| f9ada784db |
| 8c5798db43 |
```diff
@@ -45,3 +45,8 @@ Thumbs.db
 
 # instructions
 .github/instructions/
+
+# Logs and generated data
+logs/
+data/
+backend/data/
```
```diff
@@ -1,7 +1,16 @@
-# AI
+# All You Can GET AI
 
 A multi-modal AI web application. Users can choose between different AI models for text generation, text-to-image, text-to-video, and image-to-video generation, powered by [openrouter.ai](https://openrouter.ai).
 
+Key features:
+
+- Multi-modal AI generation (text, images, videos)
+- User authentication and role-based access control
+- Admin dashboard for managing users, models, and video jobs
+- Gallery for viewing generated images and videos
+- Chat interface with message history
+- Image upload and preview functionality
+
 ## Components
 
 | Component | Technology | Description |
```
```diff
@@ -14,7 +23,7 @@ A multi-modal AI web application. Users can choose between different AI models f
 
 ### Prerequisites
 
-- Python 3.11+
+- Python 3.12+
 - An [openrouter.ai](https://openrouter.ai) API key
 
 ### Setup
```
````diff
@@ -31,49 +40,98 @@ python -m venv .venv
 # Linux/macOS
 source .venv/bin/activate
 
-# Install dependencies
+# Install core dependencies
 pip install -r requirements.txt
 
-# Copy and fill in environment variables
+# Install development dependencies
+pip install -r backend/requirements-dev.txt
+pip install -r frontend/requirements-dev.txt
+
+# Copy environment variables file
 cp .env.example .env
+
+# Edit .env file and add your OpenRouter API key and configure other settings
+nano .env
 ```
 
-### Running the backend
+### Running the application locally
+
+#### Backend (FastAPI + Uvicorn)
 
 ```bash
 cd backend
-uvicorn app.main:app --reload --port 8000
+uvicorn app.main:app --reload --port 12015
 ```
 
-### Running the frontend
+#### Frontend (Flask)
 
 ```bash
 cd frontend
-flask --app app.main run --port 5000
+flask --app app.main run --port 12016 --debug
 ```
 
+### Running tests
+
+```bash
+# Run all tests
+pytest
+
+# Run backend tests only
+pytest backend/tests/
+
+# Run frontend tests only
+pytest frontend/tests/
+```
+
+### Available Environment Variables
+
+| Variable | Description | Default |
+| -------------------- | --------------------------- | ------------------- |
+| `OPENROUTER_API_KEY` | Your OpenRouter API key | _Required_ |
+| `ADMIN_EMAIL` | Default admin user email | `ai@allucanget.biz` |
+| `ADMIN_PASSWORD` | Default admin user password | `admin123` |
+| `DATABASE_URL` | DuckDB database path | `../data/app.db` |
+
+## Default admin user
+
+On first startup a default admin account is created:
+
+| Field | Value |
+| -------- | ------------------- |
+| Email | `ai@allucanget.biz` |
+| Password | `admin123` |
+| Role | `admin` |
+
+Override via environment variables `ADMIN_EMAIL` and `ADMIN_PASSWORD` before first run.
+
+## Deployment
+
+Deployed on [Coolify](https://coolify.io) using Nixpacks. See [docs/deployment/coolify.md](docs/deployment/coolify.md) for full instructions.
+
 ## Project Structure
 
 ```txt
 backend/             FastAPI backend
   app/
-    routers/         API route handlers
-    services/        Business logic
-    models/          Pydantic models
-  tests/
+    __init__.py      Package initialization
+    db.py            Database connection and operations
+    dependencies.py  Dependency injection
+    main.py          FastAPI application entrypoint
+    models/          Pydantic and database models
+    routers/         API route handlers (auth, users, admin, generate, gallery)
+    services/        Business logic for AI generation, users, admin, etc.
+  tests/             Backend test suite
 frontend/            Flask frontend
   app/
+    __init__.py      Package initialization
+    main.py          Flask application entrypoint
     templates/       Jinja2 HTML templates
     static/          CSS, JS, images
-  tests/
-data/                DuckDB database files (gitignored)
-docs/                Architecture documentation
+  tests/             Frontend test suite
+data/                DuckDB database files, uploaded media, and generated content
+logs/                Application logs
+docs/                Architecture documentation (arc42 template)
+nginx/               Nginx configuration for Coolify deployment
 ```
 
 ## Documentation
````
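The defaults in the environment-variable table above mirror how the backend reads them: each `os.getenv` call falls back to the documented default when the variable is unset. A minimal sketch of that resolution (the variable names and default values come from the table; the helper function is illustrative, not part of the codebase):

```python
import os

def admin_credentials() -> tuple[str, str]:
    """Resolve the admin login, preferring environment overrides."""
    email = os.getenv("ADMIN_EMAIL", "ai@allucanget.biz")
    password = os.getenv("ADMIN_PASSWORD", "admin123")
    return email, password

# Without overrides, the documented defaults apply.
os.environ.pop("ADMIN_EMAIL", None)
os.environ.pop("ADMIN_PASSWORD", None)
assert admin_credentials() == ("ai@allucanget.biz", "admin123")

# Setting the variables before first run overrides the seeded account.
os.environ["ADMIN_EMAIL"] = "ops@example.com"
assert admin_credentials()[0] == "ops@example.com"
```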
```diff
@@ -0,0 +1,22 @@
+FROM python:3.12-slim
+
+WORKDIR /app
+
+# Install system dependencies
+RUN apt-get update && apt-get install -y --no-install-recommends \
+    gcc \
+    curl \
+    && rm -rf /var/lib/apt/lists/*
+
+# Copy requirements and install Python dependencies
+COPY requirements.txt .
+RUN pip install --no-cache-dir -r requirements.txt
+
+# Copy application code
+COPY . .
+
+# Expose port
+EXPOSE 12015
+
+# Run the application
+CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "12015"]
```
```diff
@@ -64,3 +64,86 @@ def _run_migrations(conn: duckdb.DuckDBPyConnection) -> None:
             revoked BOOLEAN DEFAULT false
         )
     """)
+    conn.execute("""
+        CREATE TABLE IF NOT EXISTS uploaded_images (
+            id UUID DEFAULT uuid() PRIMARY KEY,
+            user_id UUID NOT NULL,
+            filename VARCHAR NOT NULL,
+            content_type VARCHAR NOT NULL,
+            file_path VARCHAR NOT NULL,
+            size_bytes BIGINT NOT NULL,
+            created_at TIMESTAMP DEFAULT now()
+        )
+    """)
+    conn.execute("""
+        CREATE TABLE IF NOT EXISTS models_cache (
+            id UUID DEFAULT uuid() PRIMARY KEY,
+            model_id VARCHAR NOT NULL UNIQUE,
+            name VARCHAR NOT NULL,
+            modality VARCHAR NOT NULL,
+            context_length BIGINT,
+            pricing JSON,
+            fetched_at TIMESTAMP NOT NULL
+        )
+    """)
+    conn.execute("""
+        CREATE TABLE IF NOT EXISTS generated_images (
+            id UUID DEFAULT uuid() PRIMARY KEY,
+            user_id UUID NOT NULL,
+            model_id VARCHAR NOT NULL,
+            prompt VARCHAR NOT NULL,
+            image_data VARCHAR NOT NULL,
+            created_at TIMESTAMP DEFAULT now()
+        )
+    """)
+    conn.execute("""
+        CREATE TABLE IF NOT EXISTS generated_videos (
+            id UUID DEFAULT uuid() PRIMARY KEY,
+            user_id UUID NOT NULL,
+            job_id VARCHAR NOT NULL,
+            model_id VARCHAR NOT NULL,
+            prompt VARCHAR NOT NULL,
+            polling_url VARCHAR,
+            status VARCHAR NOT NULL DEFAULT 'pending',
+            video_url VARCHAR,
+            created_at TIMESTAMP DEFAULT now(),
+            updated_at TIMESTAMP DEFAULT now()
+        )
+    """)
+    # Migration: add output_modalities column if absent (stores JSON array string)
+    conn.execute("""
+        ALTER TABLE models_cache ADD COLUMN IF NOT EXISTS output_modalities VARCHAR
+    """)
+    # Migration: add video job request params + generation type
+    conn.execute("""
+        ALTER TABLE generated_videos ADD COLUMN IF NOT EXISTS request_params VARCHAR
+    """)
+    conn.execute("""
+        ALTER TABLE generated_videos ADD COLUMN IF NOT EXISTS generation_type VARCHAR DEFAULT 'text_to_video'
+    """)
+    conn.execute("""
+        ALTER TABLE generated_videos ADD COLUMN IF NOT EXISTS error VARCHAR
+    """)
+    _seed_admin(conn)
+
+
+def _seed_admin(conn: duckdb.DuckDBPyConnection) -> None:
+    """Insert the default admin user if it doesn't already exist."""
+    from passlib.context import CryptContext
+    _pwd = CryptContext(schemes=["bcrypt"], deprecated="auto")
+
+    email = os.getenv("ADMIN_EMAIL", "ai@allucanget.biz")
+    password = os.getenv("ADMIN_PASSWORD", "admin123")
+
+    existing = conn.execute(
+        "SELECT id FROM users WHERE email = ?", [email]
+    ).fetchone()
+    if existing is None:
+        password_hash = _pwd.hash(password)
+        conn.execute(
+            """
+            INSERT INTO users (email, password_hash, role)
+            VALUES (?, ?, 'admin')
+            """,
+            [email, password_hash],
+        )
```
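The migrations above lean on DuckDB's `ADD COLUMN IF NOT EXISTS`, which makes re-running `_run_migrations` against an existing database a safe no-op. A hedged sketch of the same idempotent effect using stdlib sqlite3 (which lacks that clause) with an explicit column check; the helper name and table are illustrative:

```python
import sqlite3

def add_column_if_not_exists(conn: sqlite3.Connection,
                             table: str, column: str, decl: str) -> bool:
    """Emulate DuckDB's ADD COLUMN IF NOT EXISTS; returns True if added."""
    # PRAGMA table_info rows are (cid, name, type, notnull, dflt_value, pk).
    existing = {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}
    if column in existing:
        return False
    conn.execute(f"ALTER TABLE {table} ADD COLUMN {column} {decl}")
    return True

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE generated_videos (id INTEGER)")
# First run performs the migration; a second run is a safe no-op.
assert add_column_if_not_exists(conn, "generated_videos", "error", "TEXT") is True
assert add_column_if_not_exists(conn, "generated_videos", "error", "TEXT") is False
```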
```diff
@@ -3,7 +3,7 @@ from fastapi import Depends, HTTPException, status
 from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer
 from jose import JWTError
 
-from backend.app.services.auth import decode_token
+from .services.auth import decode_token
 
 _bearer = HTTPBearer()
```
+25 -13

```diff
@@ -1,9 +1,13 @@
-from backend.app.routers import auth as auth_router
-from backend.app.routers import users as users_router
-from backend.app.routers import admin as admin_router
-from backend.app.routers import ai as ai_router
-from backend.app.routers import generate as generate_router
-from backend.app.db import close_db, init_db
+from .routers import auth
+from .routers import users
+from .routers import admin
+from .routers import ai
+from .routers import generate
+from .routers import images
+from .routers import models
+from .db import close_db, get_conn, get_write_lock, init_db
+from .services.video_worker import run_worker
 import asyncio
 import os
 from contextlib import asynccontextmanager
```
```diff
@@ -17,12 +21,18 @@ load_dotenv()
 @asynccontextmanager
 async def lifespan(app: FastAPI):
     init_db()
+    worker_task = asyncio.create_task(run_worker(get_conn(), get_write_lock()))
     yield
+    worker_task.cancel()
+    try:
+        await worker_task
+    except asyncio.CancelledError:
+        pass
     close_db()
 
 
 app = FastAPI(
-    title="AI Allucanget Biz API",
+    title="All You Can GET AI Biz API",
+    description="Multi-modal AI generation API powered by openrouter.ai",
     version="0.1.0",
     lifespan=lifespan,
```
```diff
@@ -30,17 +40,19 @@ app = FastAPI(
 
 app.add_middleware(
     CORSMiddleware,
-    allow_origins=[os.getenv("CORS_ORIGINS", "http://localhost:5000")],
+    allow_origins=[os.getenv("CORS_ORIGINS", "http://localhost:12016")],
     allow_credentials=True,
     allow_methods=["*"],
     allow_headers=["*"],
 )
 
-app.include_router(auth_router.router)
-app.include_router(users_router.router)
-app.include_router(admin_router.router)
-app.include_router(ai_router.router)
-app.include_router(generate_router.router)
+app.include_router(auth.router)
+app.include_router(users.router)
+app.include_router(admin.router)
+app.include_router(ai.router)
+app.include_router(generate.router)
+app.include_router(images.router)
+app.include_router(models.router)
 
 
 @app.get("/health", tags=["health"])
```
```diff
@@ -33,8 +33,9 @@ class ModelInfo(BaseModel):
 
 class TextRequest(BaseModel):
     model: str
-    prompt: str
+    prompt: str = ""
     system_prompt: str | None = None
+    messages: list[ChatMessage] | None = None
     temperature: float = 0.7
     max_tokens: int = 1024
```
```diff
@@ -61,6 +62,7 @@ class ImageResult(BaseModel):
     url: str | None = None
     b64_json: str | None = None
     revised_prompt: str | None = None
+    image_id: str | None = None  # UUID of stored row in generated_images
 
 
 class ImageResponse(BaseModel):
```
```diff
@@ -89,7 +91,8 @@ class VideoFromImageRequest(BaseModel):
 
 
 class VideoResponse(BaseModel):
-    id: str
+    id: str  # This is the job_id from the provider
+    db_id: str | None = None  # This is the UUID from our generated_videos table
     model: str
     status: str  # "queued" | "processing" | "completed" | "failed"
     polling_url: str | None = None
```

+187 -22
```diff
@@ -1,10 +1,13 @@
 """Admin router: operational endpoints for application management."""
-from datetime import datetime, timezone
+from datetime import datetime, timedelta, timezone
+from typing import Any
 
 from fastapi import APIRouter, Depends
 
-from backend.app.db import get_conn, get_write_lock
-from backend.app.dependencies import require_admin
+from ..db import get_conn, get_write_lock
+from ..dependencies import require_admin
+from ..services import models as models_service
+from ..services.models import mark_timed_out_video_jobs
 
 router = APIRouter(prefix="/admin", tags=["admin"])
```
```diff
@@ -13,16 +16,23 @@ router = APIRouter(prefix="/admin", tags=["admin"])
 async def get_stats(_: dict = Depends(require_admin)) -> dict:
     """Return aggregate statistics: user counts and token counts."""
     conn = get_conn()
-    total_users = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
-    users_by_role = conn.execute(
-        "SELECT role, COUNT(*) FROM users GROUP BY role ORDER BY role"
-    ).fetchall()
-    total_tokens = conn.execute(
-        "SELECT COUNT(*) FROM refresh_tokens").fetchone()[0]
-    active_tokens = conn.execute(
-        "SELECT COUNT(*) FROM refresh_tokens WHERE revoked = false AND expires_at > ?",
-        [datetime.now(timezone.utc)],
-    ).fetchone()[0]
+    sql_user_count = "SELECT COUNT(*) FROM users"
+    sql_user_counts = "SELECT role, COUNT(*) FROM users GROUP BY role ORDER BY role"
+    sql_token_count = "SELECT COUNT(*) FROM refresh_tokens"
+    sql_tokens_active = "SELECT COUNT(*) FROM refresh_tokens WHERE revoked = false AND expires_at > ?"
+    now = datetime.now(timezone.utc)
+
+    total_users_row = conn.execute(sql_user_count).fetchone()
+    total_users = total_users_row[0] if total_users_row else 0
+
+    users_by_role = conn.execute(sql_user_counts).fetchall()
+
+    total_tokens_row = conn.execute(sql_token_count).fetchone()
+    total_tokens = total_tokens_row[0] if total_tokens_row else 0
+
+    active_tokens_row = conn.execute(sql_tokens_active, [now]).fetchone()
+    active_tokens = active_tokens_row[0] if active_tokens_row else 0
 
     return {
         "users": {
             "total": total_users,
```
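The refactor in this hunk replaces chained `fetchone()[0]` calls, which raise `TypeError` when a query returns no row, with a None-safe intermediate variable. A minimal stdlib sketch of the same pattern (sqlite3 stands in for DuckDB here; the table and data are made up for illustration):

```python
import sqlite3

# In-memory stand-in for the app's DuckDB connection.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, role TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'admin'), (2, 'user')")

# fetchone() can return None, making a chained [0] raise TypeError;
# binding the row first makes the default explicit.
row = conn.execute("SELECT COUNT(*) FROM users").fetchone()
total_users = row[0] if row else 0

# A query with no matching rows still yields a well-defined default.
row = conn.execute("SELECT id FROM users WHERE role = 'ghost'").fetchone()
ghost_id = row[0] if row else 0

print(total_users, ghost_id)  # prints "2 0"
```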
```diff
@@ -40,7 +50,8 @@ async def get_stats(_: dict = Depends(require_admin)) -> dict:
 async def db_health(_: dict = Depends(require_admin)) -> dict:
     """Verify DuckDB is reachable."""
     conn = get_conn()
-    result = conn.execute("SELECT 1").fetchone()[0]
+    result_row = conn.execute("SELECT 1").fetchone()
+    result = result_row[0] if result_row else 0
     return {"status": "ok" if result == 1 else "error"}
```
```diff
@@ -50,13 +61,167 @@ async def purge_tokens(_: dict = Depends(require_admin)) -> dict:
     conn = get_conn()
     lock = get_write_lock()
     now = datetime.now(timezone.utc)
+    sql_count = "SELECT COUNT(*) FROM refresh_tokens"
+    sql_delete = "DELETE FROM refresh_tokens WHERE revoked = true OR expires_at <= ?"
     async with lock:
-        before = conn.execute(
-            "SELECT COUNT(*) FROM refresh_tokens").fetchone()[0]
-        conn.execute(
-            "DELETE FROM refresh_tokens WHERE revoked = true OR expires_at <= ?",
-            [now],
-        )
-        after = conn.execute(
-            "SELECT COUNT(*) FROM refresh_tokens").fetchone()[0]
+        before_row = conn.execute(sql_count).fetchone()
+        before = before_row[0] if before_row else 0
+
+        conn.execute(sql_delete, [now])
+
+        after_row = conn.execute(sql_count).fetchone()
+        after = after_row[0] if after_row else 0
+
     return {"deleted": before - after, "remaining": after}
+
+
+@router.get("/models/status")
+async def get_model_status(_: dict = Depends(require_admin)) -> dict[str, Any]:
+    """Return model cache status: last update time and model count."""
+    conn = get_conn()
+    return models_service.get_cache_status(conn)
+
+
+@router.get("/models")
+async def get_all_models(_: dict = Depends(require_admin)) -> list[dict[str, Any]]:
+    """Return all cached models."""
+    conn = get_conn()
+    return models_service.get_cached_models(conn)
+
+
+@router.post("/models/refresh", status_code=200)
+async def refresh_models(
+    _: dict = Depends(require_admin),
+) -> dict[str, str | int | None]:
+    """Force a refresh of the model cache from OpenRouter."""
+    conn = get_conn()
+    lock = get_write_lock()
+    async with lock:
+        count = await models_service.refresh_models_cache(conn)
+    status = models_service.get_cache_status(conn)
+    return {
+        "status": "ok",
+        "refreshed": count,
+        "total_models": status.get("model_count"),
+        "last_updated": status.get("last_updated"),
+    }
+
+
+@router.get("/videos")
+async def admin_list_video_jobs(_: dict = Depends(require_admin)) -> list[dict[str, Any]]:
+    """Return all video generation jobs across all users."""
+    conn = get_conn()
+    rows = conn.execute(
+        """
+        SELECT
+            v.id, v.job_id, v.user_id, u.email, v.model_id, v.prompt,
+            v.status, v.video_url, v.created_at, v.updated_at
+        FROM generated_videos v
+        LEFT JOIN users u ON v.user_id = u.id
+        ORDER BY v.created_at DESC
+        """
+    ).fetchall()
+    return [
+        {
+            "id": str(row[0]),
+            "job_id": row[1],
+            "user_id": str(row[2]),
+            "user_email": row[3],
+            "model_id": row[4],
+            "prompt": row[5],
+            "status": row[6],
+            "video_url": row[7],
+            "created_at": row[8].isoformat() if row[8] else None,
+            "updated_at": row[9].isoformat() if row[9] else None,
+        }
+        for row in rows
+    ]
+
+
+@router.post("/videos/{job_id}/cancel", status_code=200)
+async def admin_cancel_video_job(job_id: str, _: dict = Depends(require_admin)) -> dict[str, str]:
+    """Mark a video job as 'cancelled'. Does not stop the provider job."""
+    conn = get_conn()
+    lock = get_write_lock()
+    now = datetime.now(timezone.utc)
+    async with lock:
+        conn.execute(
+            "UPDATE generated_videos SET status = 'cancelled', updated_at = ? WHERE id = ?",
+            [now, job_id],
+        )
+    return {"status": "ok", "job_id": job_id}
+
+
+@router.post("/videos/purge", status_code=200)
+async def admin_purge_video_jobs(_: dict = Depends(require_admin)) -> dict[str, Any]:
+    """Delete all completed, failed, or cancelled jobs older than 30 days."""
+    conn = get_conn()
+    lock = get_write_lock()
+    thirty_days_ago = datetime.now(timezone.utc) - timedelta(days=30)
+
+    sql_count = "SELECT COUNT(*) FROM generated_videos"
+    sql_delete = """
+        DELETE FROM generated_videos
+        WHERE status IN ('completed', 'failed', 'cancelled')
+          AND updated_at < ?
+    """
+
+    async with lock:
+        before_row = conn.execute(sql_count).fetchone()
+        before = before_row[0] if before_row else 0
+
+        conn.execute(sql_delete, [thirty_days_ago])
+
+        after_row = conn.execute(sql_count).fetchone()
+        after = after_row[0] if after_row else 0
+
+    return {"deleted": before - after, "remaining": after}
+
+
+@router.post("/videos/timed-out", status_code=200)
+async def admin_mark_timed_out(_: dict = Depends(require_admin)) -> dict[str, int]:
+    """Mark video jobs that have been in 'queued' or 'processing' status for too long as 'failed'."""
+    conn = get_conn()
+    count = mark_timed_out_video_jobs(conn, timeout_minutes=120)
+    return {"timed_out": count}
+
+
+@router.post("/videos/{job_id}/retry", status_code=200)
+async def admin_retry_video_job(job_id: str, _: dict = Depends(require_admin)) -> dict[str, str]:
+    """Reset a failed or cancelled video job back to 'queued' for reprocessing."""
+    conn = get_conn()
+    lock = get_write_lock()
+    now = datetime.now(timezone.utc)
+    async with lock:
+        row = conn.execute(
+            "SELECT status FROM generated_videos WHERE id = ?", [job_id]
+        ).fetchone()
+        if row is None:
+            from fastapi import HTTPException
+            raise HTTPException(status_code=404, detail="Job not found")
+        if row[0] not in ("failed", "cancelled"):
+            from fastapi import HTTPException
+            raise HTTPException(
+                status_code=400, detail=f"Cannot retry job with status '{row[0]}'")
+        conn.execute(
+            "UPDATE generated_videos SET status = 'queued', updated_at = ? WHERE id = ?",
+            [now, job_id],
+        )
+    return {"status": "ok", "job_id": job_id}
+
+
+@router.delete("/videos/{job_id}", status_code=200)
+async def admin_delete_video_job(job_id: str, _: dict = Depends(require_admin)) -> dict[str, str]:
+    """Permanently delete a video job record."""
+    conn = get_conn()
+    lock = get_write_lock()
+    async with lock:
+        row = conn.execute(
+            "SELECT id FROM generated_videos WHERE id = ?", [job_id]
+        ).fetchone()
+        if row is None:
+            from fastapi import HTTPException
+            raise HTTPException(status_code=404, detail="Job not found")
+        conn.execute("DELETE FROM generated_videos WHERE id = ?", [job_id])
+    return {"status": "ok", "job_id": job_id}
```
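Both purge endpoints above report `deleted` by counting rows before and after the conditional DELETE inside the write lock. The same count/delete/count pattern, sketched with stdlib sqlite3 standing in for DuckDB (table, columns, and data are illustrative stand-ins for `generated_videos`):

```python
import sqlite3
from datetime import datetime, timedelta, timezone

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER, status TEXT, updated_at TEXT)")
old = (datetime.now(timezone.utc) - timedelta(days=40)).isoformat()
recent = datetime.now(timezone.utc).isoformat()
conn.executemany("INSERT INTO jobs VALUES (?, ?, ?)", [
    (1, "completed", old),     # terminal and stale -> purged
    (2, "failed", recent),     # terminal but recent -> kept
    (3, "processing", old),    # stale but still running -> kept
])

# ISO-8601 strings with the same UTC offset compare correctly as text.
cutoff = (datetime.now(timezone.utc) - timedelta(days=30)).isoformat()
before = conn.execute("SELECT COUNT(*) FROM jobs").fetchone()[0]
conn.execute(
    "DELETE FROM jobs WHERE status IN ('completed', 'failed', 'cancelled')"
    " AND updated_at < ?",
    [cutoff],
)
after = conn.execute("SELECT COUNT(*) FROM jobs").fetchone()[0]
result = {"deleted": before - after, "remaining": after}
print(result)  # prints "{'deleted': 1, 'remaining': 2}"
```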
```diff
@@ -1,9 +1,9 @@
 """AI router: model listing and chat completions via OpenRouter."""
 from fastapi import APIRouter, Depends, HTTPException, status
 
-from backend.app.dependencies import get_current_user
-from backend.app.models.ai import ChatRequest, ChatResponse, ModelInfo
-from backend.app.services import openrouter
+from ..dependencies import get_current_user
+from ..models.ai import ChatRequest, ChatResponse, ModelInfo
+from ..services import openrouter
 
 router = APIRouter(prefix="/ai", tags=["ai"])
```
```diff
@@ -4,8 +4,8 @@ import uuid
 from fastapi import APIRouter, HTTPException, status
 from jose import JWTError
 
-from backend.app.models.auth import LoginRequest, RefreshRequest, RegisterRequest, TokenResponse
-from backend.app.services.auth import (
+from ..models.auth import LoginRequest, RefreshRequest, RegisterRequest, TokenResponse
+from ..services.auth import (
     authenticate_user,
     create_access_token,
     create_refresh_token,
```
```diff
@@ -24,7 +24,8 @@ async def register(body: RegisterRequest) -> dict:
     try:
         user = await register_user(body.email, body.password)
     except ValueError as exc:
-        raise HTTPException(status_code=status.HTTP_409_CONFLICT, detail=str(exc))
+        raise HTTPException(
+            status_code=status.HTTP_409_CONFLICT, detail=str(exc))
     return {"id": user["id"], "email": user["email"], "role": user["role"]}
```
```diff
@@ -40,7 +41,8 @@ async def login(body: LoginRequest) -> TokenResponse:
     jti = str(uuid.uuid4())
     await store_refresh_token(user["id"], jti)
     return TokenResponse(
-        access_token=create_access_token(user["id"], user["email"], user["role"]),
+        access_token=create_access_token(
+            user["id"], user["email"], user["role"]),
         refresh_token=create_refresh_token(user["id"], jti),
     )
```
```diff
@@ -71,11 +73,10 @@ async def refresh(body: RefreshRequest) -> TokenResponse:
     new_jti = str(uuid.uuid4())
     await store_refresh_token(user_id, new_jti)
 
-    from backend.app.db import get_conn
+    from ..db import get_conn
     conn = get_conn()
-    row = conn.execute(
-        "SELECT email, role FROM users WHERE id = ?", [user_id]
-    ).fetchone()
+    sql_fetch = "SELECT email, role FROM users WHERE id = ?"
+    row = conn.execute(sql_fetch, [user_id]).fetchone()
     if row is None:
         raise credentials_error
```
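The refresh flow above rotates the token identifier (`jti`) on every use: a new UUID is issued and stored so the presented refresh token cannot be replayed. A minimal sketch of that rotation bookkeeping (a plain set stands in for the `refresh_tokens` table; the function names are illustrative):

```python
import uuid

# Stand-in for the refresh_tokens table: the set of jtis still valid.
valid_jtis: set[str] = set()

def issue_refresh_jti() -> str:
    """Create and register a new refresh-token identifier."""
    jti = str(uuid.uuid4())
    valid_jtis.add(jti)
    return jti

def rotate(old_jti: str) -> str:
    """Consume the presented jti and hand back a fresh one."""
    if old_jti not in valid_jtis:
        raise ValueError("unknown or already-used refresh token")
    valid_jtis.discard(old_jti)  # revoke the old token
    return issue_refresh_jti()

first = issue_refresh_jti()
second = rotate(first)
# The old jti is now revoked; only the new one is accepted.
assert first != second and first not in valid_jtis and second in valid_jtis
```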
+277 -88

```diff
@@ -1,8 +1,13 @@
 """Generate router: text, image, video, and image-to-video generation."""
+import json
+from datetime import datetime, timezone
+
 import httpx
 from fastapi import APIRouter, Depends, HTTPException, status
 
-from backend.app.dependencies import get_current_user
-from backend.app.models.ai import (
+from ..db import get_conn, get_write_lock
+from ..dependencies import get_current_user
+from ..models.ai import (
     ImageRequest,
     ImageResponse,
     ImageResult,
```
```diff
@@ -12,7 +17,8 @@ from backend.app.models.ai import (
     VideoRequest,
     VideoResponse,
 )
-from backend.app.services import openrouter
+from ..services import openrouter
+from ..services.models import get_model_output_modalities
 
 router = APIRouter(prefix="/generate", tags=["generate"])
```
```diff
@@ -23,6 +29,13 @@ async def generate_text(
     _: dict = Depends(get_current_user),
 ) -> TextResponse:
     """Generate text from a prompt using a chat model."""
-    messages = []
-    if body.system_prompt:
-        messages.append({"role": "system", "content": body.system_prompt})
+    if body.messages:
+        messages = [{"role": m.role, "content": m.content}
+                    for m in body.messages]
+        if body.system_prompt and (not messages or messages[0]["role"] != "system"):
+            messages.insert(
+                0, {"role": "system", "content": body.system_prompt})
+    else:
+        messages = []
+        if body.system_prompt:
+            messages.append({"role": "system", "content": body.system_prompt})
```
@@ -55,160 +68,336 @@ async def generate_text(
|
||||
@router.post("/image", response_model=ImageResponse)
|
||||
async def generate_image(
|
||||
body: ImageRequest,
|
||||
_: dict = Depends(get_current_user),
|
||||
current_user: dict = Depends(get_current_user),
|
||||
) -> ImageResponse:
|
||||
"""Generate images from a text prompt."""
|
||||
# Detect if model uses chat completions (FLUX, GPT-5 Image Mini) vs /images/generations (DALL-E)
|
||||
chat_models = {"black-forest-labs/flux.2-klein-4b",
|
||||
"openai/gpt-5-image-mini"}
|
||||
is_chat_model = body.model.lower() in {m.lower() for m in chat_models} or \
|
||||
any(m in body.model.lower() for m in ["flux", "gpt-5-image-mini"])
|
||||
"""Generate images from a prompt using the chat completions endpoint.
|
||||
|
||||
All OpenRouter image models use /chat/completions with a modalities param.
|
||||
Models that output only images use ["image"]; those that also output text
|
||||
use ["image", "text"]. We look this up from the model cache; default to
|
||||
["image", "text"] when the model is not yet cached.
|
||||
"""
|
||||
# Determine modalities from cache; default ["image", "text"] works for most models
|
||||
try:
|
||||
if is_chat_model:
|
||||
image_config = {}
|
||||
conn = get_conn()
|
||||
cached_modalities = get_model_output_modalities(conn, body.model)
|
||||
except Exception:
|
||||
cached_modalities = []
|
||||
|
||||
if cached_modalities:
|
||||
# If cache says model only outputs image (no text), use ["image"]
|
||||
modalities = ["image"] if set(cached_modalities) == {
|
||||
"image"} else ["image", "text"]
|
||||
else:
|
||||
# Safe default: ["image", "text"]; works for Gemini, GPT-image etc.
|
||||
# For image-only models that fail with this, the error surfaces to the user.
|
||||
modalities = ["image", "text"]
|
||||
|
||||
image_config: dict = {}
|
||||
if body.aspect_ratio:
|
||||
image_config["aspect_ratio"] = body.aspect_ratio
|
||||
if body.image_size:
|
||||
image_config["image_size"] = body.image_size
|
||||
|
||||
try:
|
||||
result = await openrouter.generate_image_chat(
|
||||
model=body.model,
|
||||
prompt=body.prompt,
|
||||
modalities=[
|
||||
"image", "text"] if "gpt-5-image-mini" in body.model.lower() else ["image"],
|
||||
modalities=modalities,
|
||||
image_config=image_config if image_config else None,
|
||||
)
|
||||
else:
|
||||
result = await openrouter.generate_image(
|
||||
model=body.model,
|
||||
prompt=body.prompt,
|
||||
n=body.n,
|
||||
size=body.size,
|
||||
)
|
||||
except Exception as exc:
|
||||
raise HTTPException(
|
||||
status_code=status.HTTP_502_BAD_GATEWAY, detail=f"OpenRouter error: {exc}")
|
||||
|
||||
try:
|
||||
if is_chat_model:
|
||||
# Chat completions response: choices[0].message.images[].image_url.url
|
||||
images = []
|
||||
message = result.get("choices", [{}])[0].get("message", {})
|
||||
images = []
|
||||
for item in message.get("images", []):
|
||||
img_url = item.get("image_url", {}).get("url")
|
||||
images.append(ImageResult(
|
||||
url=img_url,
|
||||
b64_json=None,
|
||||
revised_prompt=message.get("content"),
|
||||
revised_prompt=message.get("content") or None,
|
||||
))
|
||||
return ImageResponse(
|
||||
id=result.get("id", ""),
|
||||
model=result.get("model", body.model),
|
||||
images=images,
|
||||
if not images:
|
||||
raise HTTPException(
|
||||
status_code=status.HTTP_502_BAD_GATEWAY,
|
||||
detail="No images returned by model. Verify the model supports image generation.",
|
||||
)
|
||||
|
||||
# Persist each image to DB
|
||||
user_id = current_user.get("id") or current_user.get("sub")
|
||||
now = datetime.now(timezone.utc).replace(tzinfo=None)
|
||||
stored: list[ImageResult] = []
|
||||
sql_insert = "INSERT INTO generated_images (user_id, model_id, prompt, image_data, created_at) VALUES (?, ?, ?, ?, ?) RETURNING id"
|
||||
async with get_write_lock():
|
||||
conn = get_conn()
|
||||
for img in images:
|
||||
if img.url:
|
||||
row = conn.execute(
|
||||
sql_insert, [user_id, body.model, body.prompt, img.url, now],).fetchone()
|
||||
image_id = str(row[0]) if row else None
|
||||
else:
|
||||
# /images/generations response: data[].url
|
||||
images = [
|
||||
ImageResult(
|
||||
url=item.get("url"),
|
||||
b64_json=item.get("b64_json"),
|
||||
revised_prompt=item.get("revised_prompt"),
|
||||
)
|
||||
for item in result.get("data", [])
|
||||
]
|
||||
image_id = None
|
||||
stored.append(ImageResult(
|
||||
url=img.url,
|
||||
b64_json=img.b64_json,
|
||||
revised_prompt=img.revised_prompt,
|
||||
image_id=image_id,
|
||||
))
|
||||
|
||||
return ImageResponse(
|
||||
id=result.get("id", ""),
|
||||
model=result.get("model", body.model),
|
||||
images=images,
|
||||
images=stored,
|
||||
)
|
||||
except HTTPException:
|
||||
raise
|
||||
except (KeyError, TypeError) as exc:
|
||||
raise HTTPException(status_code=status.HTTP_502_BAD_GATEWAY,
|
||||
detail=f"Unexpected response format: {exc}")


@router.get("/images")
async def list_generated_images(
    current_user: dict = Depends(get_current_user),
) -> list[dict]:
    """Return all generated images for the current user, newest first."""
    user_id = current_user.get("id") or current_user.get("sub")
    conn = get_conn()
    sql_fetch = (
        "SELECT id, model_id, prompt, image_data, created_at "
        "FROM generated_images WHERE user_id = ? ORDER BY created_at DESC"
    )
    rows = conn.execute(sql_fetch, [user_id]).fetchall()
    return [
        {
            "id": str(r[0]),
            "model_id": r[1],
            "prompt": r[2],
            "image_data": r[3],
            "created_at": r[4].isoformat() if r[4] else None,
        }
        for r in rows
    ]


@router.get("/images/{image_id}")
async def get_generated_image(
    image_id: str,
    current_user: dict = Depends(get_current_user),
) -> dict:
    """Return details for a single generated image."""
    user_id = current_user.get("id") or current_user.get("sub")
    conn = get_conn()
    row = conn.execute(
        """SELECT id, model_id, prompt, image_data, created_at
           FROM generated_images
           WHERE id = ? AND user_id = ?""",
        [image_id, user_id],
    ).fetchone()
    if not row:
        raise HTTPException(status_code=404, detail="Image not found")
    return {
        "id": str(row[0]),
        "model_id": row[1],
        "prompt": row[2],
        "image_data": row[3],
        "created_at": row[4].isoformat() if row[4] else None,
    }


@router.post("/video", response_model=VideoResponse)
async def generate_video(
    body: VideoRequest,
    current_user: dict = Depends(get_current_user),
) -> VideoResponse:
    """Queue a text-to-video generation job for background processing."""
    user_id = current_user.get("id") or current_user.get("sub")
    now = datetime.now(timezone.utc).replace(tzinfo=None)
    request_params = json.dumps({
        "model": body.model,
        "prompt": body.prompt,
        "duration_seconds": body.duration_seconds,
        "aspect_ratio": body.aspect_ratio,
        "resolution": body.resolution,
    })
    db_id = None
    async with get_write_lock():
        conn = get_conn()
        row = conn.execute(
            """INSERT INTO generated_videos
               (user_id, job_id, model_id, prompt, status, request_params, generation_type, created_at, updated_at)
               VALUES (?, ?, ?, ?, 'queued', ?, 'text_to_video', ?, ?) RETURNING id""",
            [user_id, "", body.model, body.prompt, request_params, now, now],
        ).fetchone()
        if row:
            db_id = str(row[0])
    return VideoResponse(
        id="",
        db_id=db_id,
        model=body.model,
        status="queued",
    )


@router.post("/video/from-image", response_model=VideoResponse)
async def generate_video_from_image(
    body: VideoFromImageRequest,
    current_user: dict = Depends(get_current_user),
) -> VideoResponse:
    """Queue an image-to-video generation job for background processing."""
    user_id = current_user.get("id") or current_user.get("sub")
    now = datetime.now(timezone.utc).replace(tzinfo=None)
    request_params = json.dumps({
        "model": body.model,
        "image_url": body.image_url,
        "prompt": body.prompt,
        "duration_seconds": body.duration_seconds,
        "aspect_ratio": body.aspect_ratio,
        "resolution": body.resolution,
    })
    db_id = None
    async with get_write_lock():
        conn = get_conn()
        row = conn.execute(
            """INSERT INTO generated_videos
               (user_id, job_id, model_id, prompt, status, request_params, generation_type, created_at, updated_at)
               VALUES (?, ?, ?, ?, 'queued', ?, 'image_to_video', ?, ?) RETURNING id""",
            [user_id, "", body.model, body.prompt, request_params, now, now],
        ).fetchone()
        if row:
            db_id = str(row[0])
    return VideoResponse(
        id="",
        db_id=db_id,
        model=body.model,
        status="queued",
    )


@router.get("/video/status", response_model=VideoResponse)
async def poll_video_status(
    polling_url: str,
    _: dict = Depends(get_current_user),
) -> VideoResponse:
    """Poll status of a video generation job; updates DB row when completed/failed."""
    try:
        result = await openrouter.poll_video_status(polling_url)
    except Exception as exc:
        raise HTTPException(
            status_code=status.HTTP_502_BAD_GATEWAY,
            detail=f"OpenRouter error: {exc}",
        )

    job_status = result.get("status", "processing")
    urls = result.get("unsigned_urls") or result.get("video_urls")
    video_url = (urls or [None])[0]

    # Update DB row for this job when a terminal state is reached
    if job_status in ("completed", "failed"):
        now = datetime.now(timezone.utc).replace(tzinfo=None)
        async with get_write_lock():
            conn = get_conn()
            conn.execute(
                """UPDATE generated_videos
                   SET status = ?, video_url = ?, updated_at = ?
                   WHERE job_id = ?""",
                [job_status, video_url, now, result.get("id", "")],
            )

    return VideoResponse(
        id=result.get("id", ""),
        model=result.get("model", ""),
        status=job_status,
        polling_url=result.get("polling_url"),
        video_urls=urls,
        video_url=video_url,
        error=result.get("error"),
        metadata=result.get("metadata"),
    )
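Callers are expected to hit this endpoint repeatedly until the job reaches a terminal state. A minimal client-side polling loop — the `fetch_status` callable is a stand-in for the actual HTTP request to the status endpoint:

```python
import time

TERMINAL_STATES = {"completed", "failed", "cancelled"}


def poll_until_done(fetch_status, interval_seconds=0.0, max_attempts=10):
    """Call fetch_status until the job reaches a terminal state or attempts run out."""
    job = {"status": "unknown"}
    for _ in range(max_attempts):
        job = fetch_status()
        if job["status"] in TERMINAL_STATES:
            break
        time.sleep(interval_seconds)
    return job


# Simulated status responses, shaped like the endpoint's VideoResponse payloads
responses = iter([
    {"status": "processing"},
    {"status": "processing"},
    {"status": "completed", "video_url": "https://example.test/clip.mp4"},
])
final = poll_until_done(lambda: next(responses))
```

A real client would use a non-zero `interval_seconds` (and likely backoff) between requests.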


@router.get("/videos")
async def list_generated_videos(
    current_user: dict = Depends(get_current_user),
) -> list[dict]:
    """Return all generated video jobs for the current user, newest first."""
    user_id = current_user.get("id") or current_user.get("sub")
    conn = get_conn()
    rows = conn.execute(
        """SELECT id, job_id, model_id, prompt, polling_url, status, video_url, error, created_at
           FROM generated_videos
           WHERE user_id = ?
           ORDER BY created_at DESC""",
        [user_id],
    ).fetchall()
    return [
        {
            "id": str(r[0]),
            "job_id": r[1],
            "model_id": r[2],
            "prompt": r[3],
            "polling_url": r[4],
            "status": r[5],
            "video_url": r[6],
            "error": r[7],
            "created_at": r[8].isoformat() if r[8] else None,
        }
        for r in rows
    ]


@router.get("/videos/{video_id}")
async def get_generated_video(
    video_id: str,
    current_user: dict = Depends(get_current_user),
) -> dict:
    """Return details for a single video generation job."""
    user_id = current_user.get("id") or current_user.get("sub")
    conn = get_conn()
    row = conn.execute(
        """SELECT id, job_id, model_id, prompt, polling_url, status, video_url, error, created_at, updated_at
           FROM generated_videos
           WHERE id = ? AND user_id = ?""",
        [video_id, user_id],
    ).fetchone()
    if not row:
        raise HTTPException(status_code=404, detail="Video job not found")
    return {
        "id": str(row[0]),
        "job_id": row[1],
        "model_id": row[2],
        "prompt": row[3],
        "polling_url": row[4],
        "status": row[5],
        "video_url": row[6],
        "error": row[7],
        "created_at": row[8].isoformat() if row[8] else None,
        "updated_at": row[9].isoformat() if row[9] else None,
    }


@router.post("/videos/{video_id}/cancel", status_code=200)
async def cancel_video_job(
    video_id: str,
    current_user: dict = Depends(get_current_user),
) -> dict[str, str]:
    """Mark a video job as 'cancelled' if it belongs to the current user and is not terminal."""
    user_id = current_user.get("id") or current_user.get("sub")
    conn = get_conn()
    row = conn.execute(
        "SELECT status FROM generated_videos WHERE id = ? AND user_id = ?",
        [video_id, user_id],
    ).fetchone()
    if not row:
        raise HTTPException(status_code=404, detail="Video job not found")
    job_status = row[0]
    if job_status in ("completed", "failed", "cancelled"):
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=f"Cannot cancel job with status '{job_status}'",
        )
    now = datetime.now(timezone.utc).replace(tzinfo=None)
    async with get_write_lock():
        conn.execute(
            "UPDATE generated_videos SET status = 'cancelled', updated_at = ? WHERE id = ?",
            [now, video_id],
        )
    return {"status": "ok", "job_id": video_id}

@@ -0,0 +1,150 @@
"""Images router: upload reference images and list user's uploads."""
import os
import uuid

from fastapi import APIRouter, Depends, HTTPException, UploadFile, status
from fastapi.responses import FileResponse

from ..db import get_conn, get_write_lock
from ..dependencies import get_current_user

router = APIRouter(prefix="/images", tags=["images"])

UPLOAD_DIR = os.getenv("UPLOAD_DIR", "data/uploads")
MAX_SIZE_BYTES = 10 * 1024 * 1024  # 10 MB
ALLOWED_CONTENT_TYPES = {"image/jpeg", "image/png", "image/webp", "image/gif"}


@router.post("/upload", status_code=status.HTTP_201_CREATED)
async def upload_image(
    file: UploadFile,
    current_user: dict = Depends(get_current_user),
) -> dict:
    """Upload a reference image and store metadata in DuckDB."""
    if file.content_type not in ALLOWED_CONTENT_TYPES:
        raise HTTPException(
            status_code=status.HTTP_415_UNSUPPORTED_MEDIA_TYPE,
            detail=f"Unsupported content type '{file.content_type}'. Allowed: {sorted(ALLOWED_CONTENT_TYPES)}",
        )

    data = await file.read()
    if len(data) > MAX_SIZE_BYTES:
        raise HTTPException(
            status_code=status.HTTP_413_REQUEST_ENTITY_TOO_LARGE,
            detail=f"File exceeds maximum allowed size of {MAX_SIZE_BYTES // (1024 * 1024)} MB.",
        )

    user_id = current_user["id"]
    image_id = str(uuid.uuid4())
    ext = (file.filename or "").rsplit(
        ".", 1)[-1].lower() if "." in (file.filename or "") else "bin"
    safe_filename = f"{image_id}.{ext}"
    user_dir = os.path.join(UPLOAD_DIR, user_id)
    os.makedirs(user_dir, exist_ok=True)
    file_path = os.path.join(user_dir, safe_filename)

    with open(file_path, "wb") as f:
        f.write(data)

    async with get_write_lock():
        conn = get_conn()
        conn.execute(
            """
            INSERT INTO uploaded_images (id, user_id, filename, content_type, file_path, size_bytes)
            VALUES (?, ?, ?, ?, ?, ?)
            """,
            [image_id, user_id, file.filename or safe_filename,
             file.content_type, file_path, len(data)],
        )

    return {
        "id": image_id,
        "filename": file.filename or safe_filename,
        "content_type": file.content_type,
        "size_bytes": len(data),
    }


@router.get("/", status_code=status.HTTP_200_OK)
async def list_images(
    current_user: dict = Depends(get_current_user),
) -> list[dict]:
    """Return all uploaded images for the current user."""
    conn = get_conn()
    rows = conn.execute(
        """
        SELECT id, filename, content_type, size_bytes, created_at
        FROM uploaded_images
        WHERE user_id = ?
        ORDER BY created_at DESC
        """,
        [current_user["id"]],
    ).fetchall()

    return [
        {
            "id": str(row[0]),
            "filename": row[1],
            "content_type": row[2],
            "size_bytes": row[3],
            "created_at": row[4].isoformat() if row[4] else None,
        }
        for row in rows
    ]


@router.get("/{image_id}", status_code=status.HTTP_200_OK)
async def get_image_details(
    image_id: str,
    current_user: dict = Depends(get_current_user),
) -> dict:
    """Return metadata for a single uploaded image."""
    conn = get_conn()
    row = conn.execute(
        """
        SELECT id, filename, content_type, size_bytes, created_at
        FROM uploaded_images
        WHERE id = ? AND user_id = ?
        """,
        [image_id, current_user["id"]],
    ).fetchone()

    if not row:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND, detail="Image not found"
        )

    return {
        "id": str(row[0]),
        "filename": row[1],
        "content_type": row[2],
        "size_bytes": row[3],
        "created_at": row[4].isoformat() if row[4] else None,
    }


@router.get("/{image_id}/file", status_code=status.HTTP_200_OK)
async def serve_image(
    image_id: str,
    current_user: dict = Depends(get_current_user),
) -> FileResponse:
    """Serve the raw image file. Only accessible by the owning user."""
    conn = get_conn()
    row = conn.execute(
        "SELECT file_path, content_type, user_id FROM uploaded_images WHERE id = ?",
        [image_id],
    ).fetchone()

    if row is None:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND, detail="Image not found.")
    if str(row[2]) != current_user["id"]:
        raise HTTPException(
            status_code=status.HTTP_403_FORBIDDEN, detail="Access denied.")

    file_path: str = row[0]
    if not os.path.isfile(file_path):
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND, detail="Image file missing.")

    return FileResponse(file_path, media_type=row[1])

@@ -0,0 +1,47 @@
"""Models router: list and refresh the OpenRouter model cache."""
from fastapi import APIRouter, Depends, HTTPException, Query, status

from ..db import get_conn, get_write_lock
from ..dependencies import get_current_user, require_admin
from ..services import models as models_service

router = APIRouter(prefix="/models", tags=["models"])


@router.get("/")
async def list_models(
    modality: str | None = Query(
        None,
        description="Filter by output modality: text, image, video, audio",
    ),
    _: dict = Depends(get_current_user),
):
    """Return cached models. Auto-refreshes the cache if stale (older than 24 h)."""
    conn = get_conn()
    if models_service.is_cache_stale(conn):
        async with get_write_lock():
            # Re-check inside the lock to avoid redundant parallel refreshes
            if models_service.is_cache_stale(conn):
                try:
                    await models_service.refresh_models_cache(conn)
                except Exception as exc:
                    raise HTTPException(
                        status_code=status.HTTP_502_BAD_GATEWAY,
                        detail=f"Failed to refresh model cache: {exc}",
                    )
    return models_service.get_cached_models(conn, modality)


@router.post("/refresh", status_code=200)
async def refresh_models(_: dict = Depends(require_admin)):
    """Force-refresh the model cache from OpenRouter. Admin only."""
    conn = get_conn()
    async with get_write_lock():
        try:
            count = await models_service.refresh_models_cache(conn)
        except Exception as exc:
            raise HTTPException(
                status_code=status.HTTP_502_BAD_GATEWAY,
                detail=f"OpenRouter error: {exc}",
            )
    return {"refreshed": count}

@@ -1,9 +1,9 @@
"""Users router: self-service profile and admin user management."""
from fastapi import APIRouter, Depends, HTTPException, status

from ..dependencies import get_current_user, require_admin
from ..models.users import SetRoleRequest, UpdateUserRequest, UserResponse
from ..services.users import (
    delete_user,
    get_user,
    list_users,
@@ -6,7 +6,7 @@ from typing import Any
from jose import JWTError, jwt
from passlib.context import CryptContext

from ..db import get_conn, get_write_lock

_pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")

@@ -35,7 +35,8 @@ def verify_password(plain: str, hashed: str) -> bool:
# --- Tokens ---


def create_access_token(user_id: str, email: str, role: str) -> str:
    expire = datetime.now(timezone.utc) + \
        timedelta(minutes=ACCESS_TOKEN_EXPIRE_MINUTES)
    payload = {
        "sub": user_id,
        "email": email,
@@ -47,7 +48,8 @@ def create_access_token(user_id: str, email: str, role: str) -> str:


def create_refresh_token(user_id: str, jti: str) -> str:
    expire = datetime.now(timezone.utc) + \
        timedelta(days=REFRESH_TOKEN_EXPIRE_DAYS)
    payload = {
        "sub": user_id,
        "jti": jti,
@@ -68,28 +70,25 @@ async def register_user(email: str, password: str) -> dict[str, Any]:
    """Insert a new user. Returns the created user row."""
    conn = get_conn()
    lock = get_write_lock()
    sql_check = "SELECT id FROM users WHERE email = ?"
    sql_insert = "INSERT INTO users (email, password_hash) VALUES (?, ?)"
    sql_fetch = "SELECT id, email, role FROM users WHERE email = ?"
    async with lock:
        existing = conn.execute(sql_check, [email]).fetchone()
        if existing:
            raise ValueError("Email already registered.")
        conn.execute(sql_insert, [email, hash_password(password)])
        row = conn.execute(sql_fetch, [email]).fetchone()
    if row is None:
        raise RuntimeError("Failed to fetch user after registration.")
    return {"id": str(row[0]), "email": row[1], "role": row[2]}


async def authenticate_user(email: str, password: str) -> dict[str, Any] | None:
    """Return user dict if credentials are valid, else None."""
    conn = get_conn()
    sql_fetch = "SELECT id, email, password_hash, role FROM users WHERE email = ?"
    row = conn.execute(sql_fetch, [email]).fetchone()
    if row is None or not verify_password(password, row[2]):
        return None
    return {"id": str(row[0]), "email": row[1], "role": row[3]}

@@ -99,34 +98,30 @@ async def store_refresh_token(user_id: str, jti: str) -> None:
    """Persist a refresh token JTI in the database."""
    conn = get_conn()
    lock = get_write_lock()
    sql_insert = "INSERT INTO refresh_tokens (jti, user_id, expires_at) VALUES (?, ?, ?)"
    from datetime import timedelta
    expires_at = datetime.now(timezone.utc) + \
        timedelta(days=REFRESH_TOKEN_EXPIRE_DAYS)
    async with lock:
        conn.execute(sql_insert, [jti, user_id, expires_at])


async def revoke_refresh_token(jti: str) -> None:
    """Mark a refresh token as revoked."""
    conn = get_conn()
    lock = get_write_lock()
    sql_update = "UPDATE refresh_tokens SET revoked = true WHERE jti = ?"
    async with lock:
        conn.execute(sql_update, [jti])


async def validate_refresh_token_jti(jti: str, user_id: str) -> bool:
    """Return True if the JTI exists, is not revoked, and belongs to user_id."""
    conn = get_conn()
    now = datetime.now(timezone.utc)
    sql_select = """
        SELECT 1 FROM refresh_tokens
        WHERE jti = ? AND user_id = ? AND revoked = false AND expires_at > ?
    """
    row = conn.execute(sql_select, [jti, user_id, now]).fetchone()
    return row is not None

@@ -0,0 +1,246 @@
"""Model cache service: fetch from OpenRouter, store in DuckDB."""
import json
from datetime import datetime, timedelta, timezone
from typing import Any

import duckdb

from . import openrouter

CACHE_TTL_HOURS = 24


def _normalize_modality(raw: str) -> str:
    """Normalize OpenRouter modality labels to canonical values."""
    value = (raw or "").strip().lower()
    if value in {"text", "image", "video", "audio", "embeddings", "embedding"}:
        return "embeddings" if value == "embedding" else value
    if "image" in value:
        return "image"
    if "video" in value:
        return "video"
    if "audio" in value:
        return "audio"
    if "embed" in value:
        return "embeddings"
    return "text"


def _parse_modality(raw_modality: str) -> str:
    """Extract output modality from OpenRouter architecture.modality string.

    Examples: "text->text", "text+image->text", "text->image", "text->video"
    """
    output = raw_modality.split(
        "->", 1)[-1] if "->" in raw_modality else raw_modality
    return _normalize_modality(output)
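These two helpers are pure string functions and can be exercised in isolation. The sketch below copies them (renamed without the leading underscore) to show how typical `architecture.modality` strings resolve:

```python
def normalize_modality(raw: str) -> str:
    """Copy of _normalize_modality for standalone checking."""
    value = (raw or "").strip().lower()
    if value in {"text", "image", "video", "audio", "embeddings", "embedding"}:
        return "embeddings" if value == "embedding" else value
    if "image" in value:
        return "image"
    if "video" in value:
        return "video"
    if "audio" in value:
        return "audio"
    if "embed" in value:
        return "embeddings"
    return "text"


def parse_modality(raw_modality: str) -> str:
    """Copy of _parse_modality: keep the part after '->' and normalize it."""
    output = raw_modality.split("->", 1)[-1] if "->" in raw_modality else raw_modality
    return normalize_modality(output)
```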


def _extract_output_modality(model: dict[str, Any]) -> str:
    """Extract output modality using the OpenRouter schema; fall back to the legacy field."""
    architecture = model.get("architecture") or {}

    output_modalities = architecture.get(
        "output_modalities") or model.get("output_modalities")
    if isinstance(output_modalities, list) and output_modalities:
        return _normalize_modality(str(output_modalities[0]))

    raw_modality = architecture.get(
        "modality") or model.get("modality") or "text->text"
    if isinstance(raw_modality, str):
        return _parse_modality(raw_modality)
    return "text"


async def _fetch_models_for_cache() -> list[dict[str, Any]]:
    """Fetch broad + modality-specific lists and merge unique models by id."""
    by_id: dict[str, dict[str, Any]] = {}

    # Primary fetch: all modalities (per OpenRouter docs).
    primary = await openrouter.list_models(output_modalities="all")
    for model in primary:
        model_id = model.get("id")
        if model_id:
            by_id[model_id] = model

    # Warmup fetches: some providers surface better results with an explicit modality filter.
    for modality in ("image", "video", "audio", "embeddings", "text"):
        try:
            subset = await openrouter.list_models(output_modalities=modality)
        except Exception:
            continue
        for model in subset:
            model_id = model.get("id")
            if model_id and model_id not in by_id:
                by_id[model_id] = model

    return list(by_id.values())


async def refresh_models_cache(conn: duckdb.DuckDBPyConnection) -> int:
    """Fetch all models from OpenRouter and replace the cache. Returns count stored."""
    raw = await _fetch_models_for_cache()
    # Use naive UTC to avoid DuckDB TIMESTAMP tz-stripping inconsistencies
    now = datetime.now(timezone.utc).replace(tzinfo=None)

    conn.execute("DELETE FROM models_cache")
    count = 0
    for m in raw:
        modality = _extract_output_modality(m)
        pricing = m.get("pricing")
        model_id = m.get("id", "")
        if not model_id:
            continue
        # Full output_modalities array from architecture (for the proper modalities param in image gen)
        architecture = m.get("architecture") or {}
        raw_output_modalities: list | None = (
            architecture.get("output_modalities") or m.get("output_modalities")
        )
        output_modalities_json: str | None = (
            json.dumps([_normalize_modality(str(v))
                        for v in raw_output_modalities])
            if isinstance(raw_output_modalities, list)
            else None
        )
        conn.execute(
            """
            INSERT INTO models_cache (model_id, name, modality, context_length, pricing, fetched_at, output_modalities)
            VALUES (?, ?, ?, ?, ?, ?, ?)
            ON CONFLICT (model_id) DO UPDATE SET
                name = excluded.name,
                modality = excluded.modality,
                context_length = excluded.context_length,
                pricing = excluded.pricing,
                fetched_at = excluded.fetched_at,
                output_modalities = excluded.output_modalities
            """,
            [
                model_id,
                m.get("name", model_id),
                modality,
                m.get("context_length"),
                json.dumps(pricing) if pricing else None,
                now,
                output_modalities_json,
            ],
        )
        count += 1
    return count


def is_cache_stale(conn: duckdb.DuckDBPyConnection) -> bool:
    """Return True if the cache is empty or last fetched more than CACHE_TTL_HOURS ago."""
    row = conn.execute("SELECT MAX(fetched_at) FROM models_cache").fetchone()
    if not row or row[0] is None:
        return True
    last_fetched = row[0]
    # DuckDB TIMESTAMP is always naive; compare against naive UTC
    if last_fetched.tzinfo is not None:
        last_fetched = last_fetched.replace(tzinfo=None)
    now_naive = datetime.now(timezone.utc).replace(tzinfo=None)
    return now_naive - last_fetched > timedelta(hours=CACHE_TTL_HOURS)


def get_cached_models(
    conn: duckdb.DuckDBPyConnection,
    modality: str | None = None,
) -> list[dict[str, Any]]:
    """Return cached models, optionally filtered by modality, ordered by name."""
    if modality:
        rows = conn.execute(
            """
            SELECT model_id, name, modality, context_length, pricing
            FROM models_cache
            WHERE modality = ?
            ORDER BY name
            """,
            [modality],
        ).fetchall()
    else:
        rows = conn.execute(
            """
            SELECT model_id, name, modality, context_length, pricing
            FROM models_cache
            ORDER BY name
            """
        ).fetchall()

    result = []
    for row in rows:
        pricing = None
        if row[4]:
            try:
                pricing = json.loads(row[4])
            except (json.JSONDecodeError, TypeError):
                pricing = None
        result.append({
            "id": row[0],
            "name": row[1],
            "modality": row[2],
            "context_length": row[3],
            "pricing": pricing,
        })
    return result


def get_model_output_modalities(
    conn: duckdb.DuckDBPyConnection,
    model_id: str,
) -> list[str]:
    """Return the output_modalities list for a model; empty list if not found."""
    row = conn.execute(
        "SELECT output_modalities FROM models_cache WHERE model_id = ?",
        [model_id],
    ).fetchone()
    if not row or not row[0]:
        return []
    try:
        return json.loads(row[0])
    except (json.JSONDecodeError, TypeError):
        return []


def get_cache_status(conn: duckdb.DuckDBPyConnection) -> dict[str, Any]:
    """Return the cache's last update time and model count."""
    row = conn.execute(
        "SELECT MAX(fetched_at), COUNT(*) FROM models_cache"
    ).fetchone()
    last_updated, model_count = (row[0], row[1]) if row else (None, 0)
    return {"last_updated": last_updated, "model_count": model_count}


def mark_timed_out_video_jobs(conn: duckdb.DuckDBPyConnection, timeout_minutes: int = 120) -> int:
    """Mark video jobs stuck in 'queued' or 'processing' for too long as 'failed'.

    Returns the number of jobs marked as timed out.
    """
    # Naive UTC, consistent with the timestamps written elsewhere in this module
    now = datetime.now(timezone.utc).replace(tzinfo=None)
    timeout_threshold = now - timedelta(minutes=timeout_minutes)

    # Find timed-out jobs
    timed_out_rows = conn.execute(
        """
        SELECT id FROM generated_videos
        WHERE status IN ('queued', 'processing')
          AND updated_at < ?
        """,
        [timeout_threshold],
    ).fetchall()

    if not timed_out_rows:
        return 0

    job_ids = [row[0] for row in timed_out_rows]
    placeholders = ",".join(["?"] * len(job_ids))

    # Update them to failed
    conn.execute(
        f"""
        UPDATE generated_videos
        SET status = 'failed', updated_at = ?
        WHERE id IN ({placeholders})
        """,
        [now] + job_ids,
    )

    return len(job_ids)
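Because the parameter binding here has no list expansion, the sweep builds one `?` placeholder per job id for the `IN (...)` filter. That construction in isolation:

```python
def build_timeout_update(job_ids: list) -> tuple[str, list]:
    """Build the parameterized IN (...) update the timeout sweep issues."""
    placeholders = ",".join(["?"] * len(job_ids))
    sql = (
        "UPDATE generated_videos SET status = 'failed', updated_at = ? "
        f"WHERE id IN ({placeholders})"
    )
    return sql, job_ids


sql, ids = build_timeout_update([7, 8, 9])
```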

@@ -20,16 +20,29 @@ def _headers() -> dict[str, str]:
        "Authorization": f"Bearer {_api_key()}",
        "Content-Type": "application/json",
        "HTTP-Referer": os.getenv("APP_URL", "https://ai.allucanget.biz"),
        "X-Title": os.getenv("APP_NAME", "All You Can GET AI"),
    }


async def list_models(
    output_modalities: str = "all",
    category: str | None = None,
    supported_parameters: str | None = None,
) -> list[dict[str, Any]]:
    """Return available models from OpenRouter.

    Docs: GET /models supports query filters like output_modalities.
    """
    base_url = os.getenv("OPENROUTER_BASE_URL", OPENROUTER_BASE_URL)
    params: dict[str, str] = {"output_modalities": output_modalities}
    if category:
        params["category"] = category
    if supported_parameters:
        params["supported_parameters"] = supported_parameters

    async with httpx.AsyncClient(timeout=15) as client:
        resp = client.build_request(
            "GET", f"{base_url}/models", headers=_headers(), params=params)
        response = await client.send(resp)
        response.raise_for_status()
        return response.json().get("data", [])
|
||||
@@ -82,8 +95,9 @@ async def generate_video(
|
||||
duration_seconds: int | None = None,
|
||||
aspect_ratio: str = "16:9",
|
||||
resolution: str | None = None,
|
||||
generate_audio: bool | None = None,
|
||||
) -> dict[str, Any]:
|
||||
"""Request text-to-video generation via OpenRouter."""
|
||||
"""Request text-to-video generation via OpenRouter POST /videos."""
|
||||
base_url = os.getenv("OPENROUTER_BASE_URL", OPENROUTER_BASE_URL)
|
||||
payload: dict[str, Any] = {
|
||||
"model": model,
|
||||
@@ -91,9 +105,12 @@ async def generate_video(
|
||||
"aspect_ratio": aspect_ratio,
|
||||
}
|
||||
if duration_seconds is not None:
|
||||
payload["duration_seconds"] = duration_seconds
|
||||
# API uses 'duration' not 'duration_seconds'
|
||||
payload["duration"] = duration_seconds
|
||||
if resolution is not None:
|
||||
payload["resolution"] = resolution
|
||||
if generate_audio is not None:
|
||||
payload["generate_audio"] = generate_audio
|
||||
async with httpx.AsyncClient(timeout=120) as client:
|
||||
resp = client.build_request(
|
||||
"POST", f"{base_url}/videos", headers=_headers(), json=payload
|
||||
@@ -110,19 +127,31 @@ async def generate_video_from_image(
|
||||
duration_seconds: int | None = None,
|
||||
aspect_ratio: str = "16:9",
|
||||
resolution: str | None = None,
|
||||
generate_audio: bool | None = None,
|
||||
) -> dict[str, Any]:
|
||||
"""Request image-to-video generation via OpenRouter."""
|
||||
"""Request image-to-video generation via OpenRouter POST /videos.
|
||||
|
||||
Uses frame_images array with first_frame as per OpenRouter API spec.
|
||||
"""
|
||||
base_url = os.getenv("OPENROUTER_BASE_URL", OPENROUTER_BASE_URL)
|
||||
payload: dict[str, Any] = {
|
||||
"model": model,
|
||||
"image_url": image_url,
|
||||
"prompt": prompt,
|
||||
"aspect_ratio": aspect_ratio,
|
||||
"frame_images": [
|
||||
{
|
||||
"type": "image_url",
|
||||
"image_url": {"url": image_url},
|
||||
"frame_type": "first_frame",
|
||||
}
|
||||
],
|
||||
}
|
||||
if duration_seconds is not None:
|
||||
payload["duration_seconds"] = duration_seconds
|
||||
payload["duration"] = duration_seconds
|
||||
if resolution is not None:
|
||||
payload["resolution"] = resolution
|
||||
if generate_audio is not None:
|
||||
payload["generate_audio"] = generate_audio
|
||||
async with httpx.AsyncClient(timeout=120) as client:
|
||||
resp = client.build_request(
|
||||
"POST", f"{base_url}/videos", headers=_headers(), json=payload
|
||||
@@ -141,6 +170,18 @@ async def poll_video_status(polling_url: str) -> dict[str, Any]:
|
||||
return response.json()
|
||||
|
||||
|
||||
async def list_video_models() -> list[dict[str, Any]]:
|
||||
"""Return video generation models from the dedicated /videos/models endpoint."""
|
||||
base_url = os.getenv("OPENROUTER_BASE_URL", OPENROUTER_BASE_URL)
|
||||
async with httpx.AsyncClient(timeout=15) as client:
|
||||
resp = client.build_request(
|
||||
"GET", f"{base_url}/videos/models", headers=_headers()
|
||||
)
|
||||
response = await client.send(resp)
|
||||
response.raise_for_status()
|
||||
return response.json().get("data", [])
|
||||
|
||||
|
||||
async def generate_image_chat(
|
||||
model: str,
|
||||
prompt: str,
|
||||
|
||||
@@ -1,8 +1,8 @@
 """User management service: CRUD helpers against DuckDB."""
 from typing import Any

-from backend.app.db import get_conn, get_write_lock
-from backend.app.services.auth import hash_password
+from ..db import get_conn, get_write_lock
+from .auth import hash_password


 async def get_user(user_id: str) -> dict[str, Any] | None:
@@ -0,0 +1,159 @@
+"""Background worker: processes queued/processing video generation jobs."""
+import asyncio
+import json
+import logging
+from datetime import datetime, timezone
+
+import duckdb
+
+from . import openrouter
+from .models import mark_timed_out_video_jobs
+
+logger = logging.getLogger(__name__)
+
+# Interval between worker ticks (seconds)
+WORKER_INTERVAL = 15
+# Jobs to process per tick (prevents unbounded bursts)
+BATCH_SIZE = 5
+
+
+async def process_queued_jobs(conn: duckdb.DuckDBPyConnection, lock: asyncio.Lock) -> int:
+    """Submit queued jobs to OpenRouter and transition them to 'processing'."""
+    rows = conn.execute(
+        """SELECT id, generation_type, request_params
+           FROM generated_videos
+           WHERE status = 'queued' AND request_params IS NOT NULL
+           ORDER BY created_at ASC
+           LIMIT ?""",
+        [BATCH_SIZE],
+    ).fetchall()
+
+    processed = 0
+    for row in rows:
+        db_id, generation_type, raw_params = str(row[0]), row[1], row[2]
+        try:
+            params = json.loads(raw_params)
+        except (json.JSONDecodeError, TypeError):
+            logger.error("Bad request_params for video job %s", db_id)
+            continue
+
+        try:
+            if generation_type == "image_to_video":
+                result = await openrouter.generate_video_from_image(
+                    model=params["model"],
+                    image_url=params.get("image_url", ""),
+                    prompt=params.get("prompt", ""),
+                    duration_seconds=params.get("duration_seconds"),
+                    aspect_ratio=params.get("aspect_ratio", "16:9"),
+                    resolution=params.get("resolution"),
+                )
+            else:
+                result = await openrouter.generate_video(
+                    model=params["model"],
+                    prompt=params.get("prompt", ""),
+                    duration_seconds=params.get("duration_seconds"),
+                    aspect_ratio=params.get("aspect_ratio", "16:9"),
+                    resolution=params.get("resolution"),
+                )
+        except Exception as exc:
+            logger.warning("OpenRouter call failed for job %s: %s", db_id, exc)
+            now = datetime.now(timezone.utc).replace(tzinfo=None)
+            async with lock:
+                conn.execute(
+                    "UPDATE generated_videos SET status = 'failed', error = ?, updated_at = ? WHERE id = ?",
+                    [str(exc), now, db_id],
+                )
+            continue
+
+        job_id = result.get("id", "")
+        polling_url = result.get("polling_url")
+        new_status = result.get("status", "processing")
+        # Normalise terminal statuses returned immediately (rare but possible)
+        if new_status not in ("queued", "processing", "completed", "failed", "cancelled"):
+            new_status = "processing"
+
+        urls = result.get("unsigned_urls") or result.get("video_urls")
+        video_url = (urls or [None])[0]
+        now = datetime.now(timezone.utc).replace(tzinfo=None)
+
+        async with lock:
+            conn.execute(
+                """UPDATE generated_videos
+                   SET job_id = ?, polling_url = ?, status = ?, video_url = ?, updated_at = ?
+                   WHERE id = ?""",
+                [job_id, polling_url, new_status, video_url, now, db_id],
+            )
+        processed += 1
+        logger.info("Video job %s → %s (provider id: %s)",
+                    db_id, new_status, job_id)

+    return processed
+
+
+async def process_processing_jobs(conn: duckdb.DuckDBPyConnection, lock: asyncio.Lock) -> int:
+    """Poll in-progress jobs and update to 'completed' or 'failed'."""
+    rows = conn.execute(
+        """SELECT id, polling_url
+           FROM generated_videos
+           WHERE status = 'processing' AND polling_url IS NOT NULL
+           ORDER BY updated_at ASC
+           LIMIT ?""",
+        [BATCH_SIZE],
+    ).fetchall()
+
+    updated = 0
+    for row in rows:
+        db_id, polling_url = str(row[0]), row[1]
+        try:
+            result = await openrouter.poll_video_status(polling_url)
+        except Exception as exc:
+            logger.warning("Polling failed for job %s: %s", db_id, exc)
+            continue
+
+        job_status = result.get("status", "processing")
+        if job_status not in ("completed", "failed"):
+            continue  # still in-progress, check again next tick
+
+        urls = result.get("unsigned_urls") or result.get("video_urls")
+        video_url = (urls or [None])[0]
+        error_msg = result.get("error")
+        now = datetime.now(timezone.utc).replace(tzinfo=None)
+
+        async with lock:
+            conn.execute(
+                """UPDATE generated_videos
+                   SET status = ?, video_url = ?, error = ?, updated_at = ?
+                   WHERE id = ?""",
+                [job_status, video_url, error_msg, now, db_id],
+            )
+        updated += 1
+        logger.info("Video job %s → %s", db_id, job_status)
+
+    return updated
+
+
+async def worker_tick(conn: duckdb.DuckDBPyConnection, lock: asyncio.Lock) -> None:
+    """Single worker tick: submit queued, poll processing, expire timed-out."""
+    queued = await process_queued_jobs(conn, lock)
+    polled = await process_processing_jobs(conn, lock)
+    async with lock:
+        timed_out = mark_timed_out_video_jobs(conn, timeout_minutes=120)
+    if queued or polled or timed_out:
+        logger.info(
+            "Worker tick: submitted=%d polled=%d timed_out=%d",
+            queued, polled, timed_out,
+        )
+
+
+async def run_worker(conn: duckdb.DuckDBPyConnection, lock: asyncio.Lock) -> None:
+    """Infinite loop: run a worker tick every WORKER_INTERVAL seconds."""
+    logger.info("Video worker started (interval=%ds)", WORKER_INTERVAL)
+    while True:
+        try:
+            await worker_tick(conn, lock)
+        except asyncio.CancelledError:
+            logger.info("Video worker stopped.")
+            return
+        except Exception as exc:
+            logger.exception("Unexpected error in video worker: %s", exc)
+        await asyncio.sleep(WORKER_INTERVAL)
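`run_worker` is shaped to be launched as a background `asyncio` task and stopped by cancellation on shutdown. A minimal, self-contained sketch of that lifecycle (generic tick loop; no database, FastAPI, or OpenRouter involved):

```python
import asyncio

ticks = 0

async def tick() -> None:
    """Stand-in for worker_tick."""
    global ticks
    ticks += 1

async def run_loop(interval: float) -> None:
    """Same shape as run_worker: tick, sleep, exit cleanly on cancellation."""
    while True:
        try:
            await tick()
        except asyncio.CancelledError:
            return  # graceful shutdown mid-tick
        await asyncio.sleep(interval)

async def main() -> None:
    task = asyncio.create_task(run_loop(0.01))
    await asyncio.sleep(0.05)   # let a few ticks run
    task.cancel()               # e.g. on application shutdown
    try:
        await task
    except asyncio.CancelledError:
        pass  # cancellation during sleep surfaces here

asyncio.run(main())
```

Cancellation raised while the loop is sleeping propagates out of the task; cancellation raised inside the tick is caught and turned into a clean return, which is why both paths need handling.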
@@ -1,13 +0,0 @@
-# Nixpacks configuration for the FastAPI backend
-
-[phases.setup]
-nixpkgsArchive = "88a9d1386465831607986442fd9c8c0e7a1b2f5"
-aptPkgs = ["git"]
-
-[phases.install]
-# Nixpacks auto-detects Python and runs pip install -r requirements.txt
-
-[build]
-
-[deploy]
-startCommand = "uvicorn backend.app.main:app --host 0.0.0.0 --port 8000"
@@ -0,0 +1,13 @@
+# Dev-only dependencies for local development and testing
+# Production dependencies are in requirements.txt
+
+pytest
+pytest-asyncio
+Flask
+gunicorn
+Pygments
+tomli
+exceptiongroup
+iniconfig
+pluggy
@@ -0,0 +1,21 @@
+anyio
+bcrypt==4.0.1
+blinker
+certifi
+cryptography
+dnspython
+duckdb
+ecdsa
+email-validator
+fastapi
+httpcore
+httpx
+Jinja2
+MarkupSafe
+packaging
+passlib==1.7.4
+pydantic
+python-dotenv
+python-jose
+python-multipart
+uvicorn
@@ -4,8 +4,8 @@ import pytest
 import pytest_asyncio
 from httpx import AsyncClient, ASGITransport

-from backend.app.main import app
-from backend.app import db as db_module
+from app.main import app
+from app import db as db_module

 os.environ.setdefault("JWT_SECRET", "test-secret-key-for-testing-only")

@@ -53,7 +53,8 @@ async def test_stats_as_admin(client):
     resp = await client.get("/admin/stats", headers={"Authorization": f"Bearer {token}"})
     assert resp.status_code == 200
     data = resp.json()
-    assert data["users"]["total"] == 3  # 2 users + 1 admin
+    # 2 users + 1 admin + 1 seeded admin (ai@allucanget.biz)
+    assert data["users"]["total"] == 4
     assert "by_role" in data["users"]
     assert "refresh_tokens" in data
@@ -5,8 +5,8 @@ import pytest_asyncio
 from unittest.mock import AsyncMock, patch
 from httpx import AsyncClient, ASGITransport

-from backend.app.main import app
-from backend.app import db as db_module
+from app.main import app
+from app import db as db_module

 os.environ.setdefault("JWT_SECRET", "test-secret-key-for-testing-only")
 os.environ.setdefault("OPENROUTER_API_KEY", "test-key")
@@ -53,7 +53,7 @@ async def _user_token(client):
 async def test_list_models(client):
     token = await _user_token(client)
     with patch(
-        "backend.app.routers.ai.openrouter.list_models",
+        "app.routers.ai.openrouter.list_models",
         new_callable=AsyncMock,
         return_value=FAKE_MODELS,
     ):
@@ -74,7 +74,7 @@ async def test_list_models_unauthenticated(client):
 async def test_list_models_upstream_error(client):
     token = await _user_token(client)
     with patch(
-        "backend.app.routers.ai.openrouter.list_models",
+        "app.routers.ai.openrouter.list_models",
         new_callable=AsyncMock,
         side_effect=Exception("Connection refused"),
     ):
@@ -91,7 +91,7 @@ async def test_list_models_upstream_error(client):
 async def test_chat_success(client):
     token = await _user_token(client)
     with patch(
-        "backend.app.routers.ai.openrouter.chat_completion",
+        "app.routers.ai.openrouter.chat_completion",
         new_callable=AsyncMock,
         return_value=FAKE_CHAT_RESPONSE,
     ):
@@ -115,7 +115,7 @@ async def test_chat_success(client):
 async def test_chat_passes_parameters(client):
     token = await _user_token(client)
     mock = AsyncMock(return_value=FAKE_CHAT_RESPONSE)
-    with patch("backend.app.routers.ai.openrouter.chat_completion", new_callable=AsyncMock, return_value=FAKE_CHAT_RESPONSE) as mock:
+    with patch("app.routers.ai.openrouter.chat_completion", new_callable=AsyncMock, return_value=FAKE_CHAT_RESPONSE) as mock:
         await client.post(
             "/ai/chat",
             json={
@@ -145,7 +145,7 @@ async def test_chat_unauthenticated(client):
 async def test_chat_upstream_error(client):
     token = await _user_token(client)
     with patch(
-        "backend.app.routers.ai.openrouter.chat_completion",
+        "app.routers.ai.openrouter.chat_completion",
         new_callable=AsyncMock,
         side_effect=Exception("timeout"),
     ):
@@ -160,7 +160,7 @@ async def test_chat_upstream_error(client):
 async def test_chat_malformed_upstream_response(client):
     token = await _user_token(client)
     with patch(
-        "backend.app.routers.ai.openrouter.chat_completion",
+        "app.routers.ai.openrouter.chat_completion",
         new_callable=AsyncMock,
         return_value={"id": "x", "choices": []},  # empty choices
     ):
@@ -1,6 +1,6 @@
 """Integration tests for auth endpoints using in-memory DuckDB."""
-from backend.app.main import app
-from backend.app import db as db_module
+from app.main import app
+from app import db as db_module
 from httpx import AsyncClient, ASGITransport
 import os
 import pytest
@@ -3,7 +3,7 @@ import asyncio
 import pytest
 import duckdb

-from backend.app import db as db_module
+from app import db as db_module


 @pytest.fixture(autouse=True)
@@ -192,3 +192,39 @@ async def test_write_lock_serialises_concurrent_writes():
     # Each writer's start and end must be adjacent (no interleaving)
     assert order.index("A-start") + 1 == order.index("A-end") or \
         order.index("B-start") + 1 == order.index("B-end")
+
+
+# ---------------------------------------------------------------------------
+# Admin seed user
+# ---------------------------------------------------------------------------
+
+def test_seed_admin_user_created_on_init():
+    import os
+    conn = db_module.init_db(":memory:")
+    row = conn.execute(
+        "SELECT email, role FROM users WHERE email = 'ai@allucanget.biz'"
+    ).fetchone()
+    assert row is not None
+    assert row[0] == "ai@allucanget.biz"
+    assert row[1] == "admin"
+
+
+def test_seed_admin_is_idempotent():
+    conn = db_module.init_db(":memory:")
+    # Simulate re-running seed (second init_db call reuses connection, so call _seed_admin directly)
+    db_module._seed_admin(conn)
+    count = conn.execute(
+        "SELECT COUNT(*) FROM users WHERE email = 'ai@allucanget.biz'"
+    ).fetchone()[0]
+    assert count == 1
+
+
+def test_seed_admin_email_env_override(monkeypatch):
+    monkeypatch.setenv("ADMIN_EMAIL", "custom@example.com")
+    monkeypatch.setenv("ADMIN_PASSWORD", "custompass")
+    conn = db_module.init_db(":memory:")
+    row = conn.execute(
+        "SELECT email, role FROM users WHERE email = 'custom@example.com'"
+    ).fetchone()
+    assert row is not None
+    assert row[1] == "admin"
@@ -5,8 +5,8 @@ import pytest_asyncio
 from unittest.mock import AsyncMock, patch
 from httpx import AsyncClient, ASGITransport

-from backend.app.main import app
-from backend.app import db as db_module
+from app.main import app
+from app import db as db_module

 os.environ.setdefault("JWT_SECRET", "test-secret-key-for-testing-only")
 os.environ.setdefault("OPENROUTER_API_KEY", "test-key")
@@ -18,15 +18,6 @@ FAKE_CHAT = {
     "usage": {"prompt_tokens": 5, "completion_tokens": 10, "total_tokens": 15},
 }

-FAKE_IMAGE = {
-    "id": "gen-img-1",
-    "model": "openai/dall-e-3",
-    "data": [
-        {"url": "https://example.com/image.png",
-         "revised_prompt": "A cat on the moon"},
-    ],
-}
-
 FAKE_VIDEO = {
     "id": "gen-vid-1",
     "polling_url": "https://openrouter.ai/api/v1/videos/gen-vid-1",
@@ -69,7 +60,7 @@ async def _user_token(client):

 async def test_generate_text(client):
     token = await _user_token(client)
-    with patch("backend.app.routers.generate.openrouter.chat_completion", new_callable=AsyncMock, return_value=FAKE_CHAT):
+    with patch("app.routers.generate.openrouter.chat_completion", new_callable=AsyncMock, return_value=FAKE_CHAT):
         resp = await client.post(
             "/generate/text",
             json={"model": "openai/gpt-4o", "prompt": "Tell me a story"},
@@ -85,7 +76,7 @@ async def test_generate_text(client):
 async def test_generate_text_with_system_prompt(client):
     token = await _user_token(client)
     mock = AsyncMock(return_value=FAKE_CHAT)
-    with patch("backend.app.routers.generate.openrouter.chat_completion", mock):
+    with patch("app.routers.generate.openrouter.chat_completion", mock):
         await client.post(
             "/generate/text",
             json={"model": "openai/gpt-4o", "prompt": "Hello",
@@ -97,6 +88,44 @@ async def test_generate_text_with_system_prompt(client):
     assert call_messages[1] == {"role": "user", "content": "Hello"}


+async def test_generate_text_with_messages_array(client):
+    """messages field takes precedence over prompt for multi-turn chat."""
+    token = await _user_token(client)
+    mock = AsyncMock(return_value=FAKE_CHAT)
+    messages = [
+        {"role": "user", "content": "First message"},
+        {"role": "assistant", "content": "Reply"},
+        {"role": "user", "content": "Follow up"},
+    ]
+    with patch("app.routers.generate.openrouter.chat_completion", mock):
+        resp = await client.post(
+            "/generate/text",
+            json={"model": "openai/gpt-4o", "messages": messages},
+            headers={"Authorization": f"Bearer {token}"},
+        )
+    assert resp.status_code == 200
+    call_messages = mock.call_args.kwargs["messages"]
+    assert len(call_messages) == 3
+    assert call_messages[2]["content"] == "Follow up"
+
+
+async def test_generate_text_messages_with_system_prompt(client):
+    """system_prompt prepended when messages provided and no system msg present."""
+    token = await _user_token(client)
+    mock = AsyncMock(return_value=FAKE_CHAT)
+    messages = [{"role": "user", "content": "Hi"}]
+    with patch("app.routers.generate.openrouter.chat_completion", mock):
+        await client.post(
+            "/generate/text",
+            json={"model": "openai/gpt-4o", "messages": messages,
+                  "system_prompt": "Be brief."},
+            headers={"Authorization": f"Bearer {token}"},
+        )
+    call_messages = mock.call_args.kwargs["messages"]
+    assert call_messages[0] == {"role": "system", "content": "Be brief."}
+    assert call_messages[1] == {"role": "user", "content": "Hi"}
+
+
 async def test_generate_text_unauthenticated(client):
     resp = await client.post("/generate/text", json={"model": "openai/gpt-4o", "prompt": "Hi"})
     assert resp.status_code == 401
@@ -104,7 +133,7 @@ async def test_generate_text_unauthenticated(client):

 async def test_generate_text_upstream_error(client):
     token = await _user_token(client)
-    with patch("backend.app.routers.generate.openrouter.chat_completion", new_callable=AsyncMock, side_effect=Exception("timeout")):
+    with patch("app.routers.generate.openrouter.chat_completion", new_callable=AsyncMock, side_effect=Exception("timeout")):
         resp = await client.post(
             "/generate/text",
             json={"model": "openai/gpt-4o", "prompt": "Hi"},
@@ -117,47 +146,13 @@ async def test_generate_text_upstream_error(client):
 # POST /generate/image
 # ---------------------------------------------------------------------------

-async def test_generate_image(client):
-    token = await _user_token(client)
-    with patch("backend.app.routers.generate.openrouter.generate_image", new_callable=AsyncMock, return_value=FAKE_IMAGE):
-        resp = await client.post(
-            "/generate/image",
-            json={"model": "openai/dall-e-3", "prompt": "A cat on the moon"},
-            headers={"Authorization": f"Bearer {token}"},
-        )
-    assert resp.status_code == 200
-    data = resp.json()
-    assert data["id"] == "gen-img-1"
-    assert len(data["images"]) == 1
-    assert data["images"][0]["url"] == "https://example.com/image.png"
-    assert data["images"][0]["revised_prompt"] == "A cat on the moon"
-
-
-async def test_generate_image_unauthenticated(client):
-    resp = await client.post("/generate/image", json={"model": "openai/dall-e-3", "prompt": "Hi"})
-    assert resp.status_code == 401
-
-
-async def test_generate_image_upstream_error(client):
-    token = await _user_token(client)
-    with patch("backend.app.routers.generate.openrouter.generate_image", new_callable=AsyncMock, side_effect=Exception("rate limit")):
-        resp = await client.post(
-            "/generate/image",
-            json={"model": "openai/dall-e-3", "prompt": "Hi"},
-            headers={"Authorization": f"Bearer {token}"},
-        )
-    assert resp.status_code == 502
-
-
 # --- Chat-based image generation (FLUX, GPT-5 Image Mini) ---

 FAKE_IMAGE_CHAT_FLUX = {
     "id": "gen-img-chat-1",
     "model": "black-forest-labs/flux.2-klein-4b",
     "choices": [{
         "message": {
             "role": "assistant",
-            "content": "Here is your generated image.",
+            "content": None,
             "images": [{
                 "type": "image_url",
                 "image_url": {"url": "data:image/png;base64,abc123"},
@@ -181,45 +176,65 @@ FAKE_IMAGE_CHAT_GPT5 = {
     }],
 }

+FAKE_IMAGE_CHAT_GEMINI = {
+    "id": "gen-img-chat-3",
+    "model": "google/gemini-2.5-flash-image",
+    "choices": [{
+        "message": {
+            "role": "assistant",
+            "content": "Here is your image.",
+            "images": [{
+                "type": "image_url",
+                "image_url": {"url": "data:image/png;base64,gemini123"},
+            }],
+        }
+    }],
+}
+
-async def test_generate_image_chat_flux(client):
+
+async def test_generate_image(client):
+    """All models now use generate_image_chat (chat completions endpoint)."""
     token = await _user_token(client)
-    with patch("backend.app.routers.generate.openrouter.generate_image_chat", new_callable=AsyncMock, return_value=FAKE_IMAGE_CHAT_FLUX):
+    with patch("app.routers.generate.openrouter.generate_image_chat", new_callable=AsyncMock, return_value=FAKE_IMAGE_CHAT_GEMINI):
         resp = await client.post(
             "/generate/image",
-            json={"model": "black-forest-labs/flux.2-klein-4b",
-                  "prompt": "A sunset"},
+            json={"model": "google/gemini-2.5-flash-image",
+                  "prompt": "A cat on the moon"},
             headers={"Authorization": f"Bearer {token}"},
         )
     assert resp.status_code == 200
     data = resp.json()
-    assert data["id"] == "gen-img-chat-1"
+    assert data["id"] == "gen-img-chat-3"
     assert len(data["images"]) == 1
-    assert data["images"][0]["url"] == "data:image/png;base64,abc123"
+    assert data["images"][0]["url"] == "data:image/png;base64,gemini123"
+    assert data["images"][0]["image_id"] is not None  # stored in DB


-async def test_generate_image_chat_gpt5_image_mini(client):
+async def test_generate_image_unauthenticated(client):
+    resp = await client.post("/generate/image", json={"model": "google/gemini-2.5-flash-image", "prompt": "Hi"})
+    assert resp.status_code == 401
+
+
+async def test_generate_image_upstream_error(client):
     token = await _user_token(client)
-    with patch("backend.app.routers.generate.openrouter.generate_image_chat", new_callable=AsyncMock, return_value=FAKE_IMAGE_CHAT_GPT5):
+    with patch("app.routers.generate.openrouter.generate_image_chat", new_callable=AsyncMock, side_effect=Exception("rate limit")):
         resp = await client.post(
             "/generate/image",
-            json={"model": "openai/gpt-5-image-mini", "prompt": "A cat"},
+            json={"model": "google/gemini-2.5-flash-image", "prompt": "Hi"},
             headers={"Authorization": f"Bearer {token}"},
         )
-    assert resp.status_code == 200
-    data = resp.json()
-    assert data["model"] == "openai/gpt-5-image-mini"
-    assert len(data["images"]) == 1
+    assert resp.status_code == 502


-async def test_generate_image_chat_with_image_config(client):
+async def test_generate_image_with_image_config(client):
     """Passes aspect_ratio + image_size through to generate_image_chat."""
     token = await _user_token(client)
-    mock = AsyncMock(return_value=FAKE_IMAGE_CHAT_FLUX)
-    with patch("backend.app.routers.generate.openrouter.generate_image_chat", mock):
+    mock = AsyncMock(return_value=FAKE_IMAGE_CHAT_GEMINI)
+    with patch("app.routers.generate.openrouter.generate_image_chat", mock):
         await client.post(
             "/generate/image",
             json={
-                "model": "black-forest-labs/flux.2-klein-4b",
+                "model": "google/gemini-2.5-flash-image",
                 "prompt": "A landscape",
                 "aspect_ratio": "16:9",
                 "image_size": "2K",
@@ -229,23 +244,112 @@ async def test_generate_image_chat_with_image_config(client):
     call_kwargs = mock.call_args.kwargs
     assert call_kwargs["image_config"]["aspect_ratio"] == "16:9"
     assert call_kwargs["image_config"]["image_size"] == "2K"
+    assert call_kwargs["modalities"] == ["image"]


-async def test_generate_image_chat_unauthenticated(client):
-    resp = await client.post("/generate/image", json={"model": "flux.2-klein-4b", "prompt": "Hi"})
-    assert resp.status_code == 401
-
-
-async def test_generate_image_chat_upstream_error(client):
+async def test_generate_image_default_modalities_image_text(client):
+    """Model not in cache → default modalities = ['image', 'text']."""
     token = await _user_token(client)
-    with patch("backend.app.routers.generate.openrouter.generate_image_chat", new_callable=AsyncMock, side_effect=Exception("timeout")):
+    mock = AsyncMock(return_value=FAKE_IMAGE_CHAT_GEMINI)
+    with patch("app.routers.generate.openrouter.generate_image_chat", mock):
+        await client.post(
+            "/generate/image",
+            json={"model": "google/gemini-2.5-flash-image", "prompt": "Hi"},
+            headers={"Authorization": f"Bearer {token}"},
+        )
+    assert mock.call_args.kwargs["modalities"] == ["image", "text"]
+
+
+async def test_generate_image_image_only_modalities_from_cache(client):
+    """Model cached with image-only output_modalities → modalities = ['image']."""
+    from app import db as db_module
+    from app.services.models import get_model_output_modalities
+    import json as _json
+    token = await _user_token(client)
+
+    # Seed cache with image-only model
+    conn = db_module.get_conn()
+    from datetime import datetime, timezone
+    now = datetime.now(timezone.utc).replace(tzinfo=None)
+    conn.execute(
+        "DELETE FROM models_cache WHERE model_id = 'black-forest-labs/flux.2-pro'"
+    )
+    conn.execute(
+        """INSERT INTO models_cache (model_id, name, modality, context_length, pricing, fetched_at, output_modalities)
+           VALUES (?, ?, ?, ?, ?, ?, ?)""",
+        ["black-forest-labs/flux.2-pro", "FLUX.2 Pro", "image", None, None, now,
+         _json.dumps(["image"])],
+    )
+
+    mock = AsyncMock(return_value=FAKE_IMAGE_CHAT_FLUX)
+    with patch("app.routers.generate.openrouter.generate_image_chat", mock):
         resp = await client.post(
             "/generate/image",
-            json={"model": "black-forest-labs/flux.2-klein-4b", "prompt": "Hi"},
+            json={"model": "black-forest-labs/flux.2-pro", "prompt": "Sky"},
             headers={"Authorization": f"Bearer {token}"},
         )
+    assert resp.status_code == 200
+    assert mock.call_args.kwargs["modalities"] == ["image"]
+
+
+async def test_generate_image_no_images_in_response(client):
+    """502 when model returns no images."""
+    token = await _user_token(client)
+    empty_response = {
+        "id": "gen-empty",
+        "model": "google/gemini-2.5-flash-image",
+        "choices": [{"message": {"role": "assistant", "content": "ok", "images": []}}],
+    }
+    with patch("app.routers.generate.openrouter.generate_image_chat",
+               new_callable=AsyncMock, return_value=empty_response):
+        resp = await client.post(
+            "/generate/image",
+            json={"model": "google/gemini-2.5-flash-image", "prompt": "Hi"},
+            headers={"Authorization": f"Bearer {token}"},
+        )
+    assert resp.status_code == 502
+    assert "No images returned" in resp.json()["detail"]
+
+
+async def test_generate_image_flux(client):
+    """Flux model works correctly via chat completions."""
+    token = await _user_token(client)
+    with patch("app.routers.generate.openrouter.generate_image_chat",
+               new_callable=AsyncMock, return_value=FAKE_IMAGE_CHAT_FLUX):
+        resp = await client.post(
+            "/generate/image",
+            json={"model": "black-forest-labs/flux.2-klein-4b",
+                  "prompt": "A sunset"},
+            headers={"Authorization": f"Bearer {token}"},
+        )
+    assert resp.status_code == 200
+    data = resp.json()
+    assert data["images"][0]["url"] == "data:image/png;base64,abc123"
+
+
+async def test_generate_image_stored_in_db(client):
+    """Generated image row persists in generated_images table."""
+    from app import db as db_module
+    token = await _user_token(client)
+    with patch("app.routers.generate.openrouter.generate_image_chat",
+               new_callable=AsyncMock, return_value=FAKE_IMAGE_CHAT_GEMINI):
+        resp = await client.post(
+            "/generate/image",
+            json={"model": "google/gemini-2.5-flash-image",
+                  "prompt": "A mountain"},
+            headers={"Authorization": f"Bearer {token}"},
+        )
+    assert resp.status_code == 200
+    image_id = resp.json()["images"][0]["image_id"]
+    assert image_id is not None
+
+    row = db_module.get_conn().execute(
+        "SELECT model_id, prompt, image_data FROM generated_images WHERE id = ?",
+        [image_id],
+    ).fetchone()
+    assert row is not None
+    assert row[0] == "google/gemini-2.5-flash-image"
+    assert row[1] == "A mountain"
+    assert row[2] == "data:image/png;base64,gemini123"


 # ---------------------------------------------------------------------------
@@ -254,7 +358,7 @@ async def test_generate_image_chat_upstream_error(client):

async def test_generate_video(client):
    token = await _user_token(client)
-    with patch("backend.app.routers.generate.openrouter.generate_video", new_callable=AsyncMock, return_value=FAKE_VIDEO):
+    with patch("app.routers.generate.openrouter.generate_video", new_callable=AsyncMock, return_value=FAKE_VIDEO):
        resp = await client.post(
            "/generate/video",
            json={"model": "stability/stable-video",
@@ -276,7 +380,7 @@ async def test_generate_video_unauthenticated(client):

async def test_generate_video_upstream_error(client):
    token = await _user_token(client)
-    with patch("backend.app.routers.generate.openrouter.generate_video", new_callable=AsyncMock, side_effect=Exception("503")):
+    with patch("app.routers.generate.openrouter.generate_video", new_callable=AsyncMock, side_effect=Exception("503")):
        resp = await client.post(
            "/generate/video",
            json={"model": "stability/stable-video", "prompt": "Hi"},
@@ -291,7 +395,7 @@ async def test_generate_video_upstream_error(client):

async def test_generate_video_from_image(client):
    token = await _user_token(client)
-    with patch("backend.app.routers.generate.openrouter.generate_video_from_image", new_callable=AsyncMock, return_value=FAKE_VIDEO_DONE):
+    with patch("app.routers.generate.openrouter.generate_video_from_image", new_callable=AsyncMock, return_value=FAKE_VIDEO_DONE):
        resp = await client.post(
            "/generate/video/from-image",
            json={
@@ -315,7 +419,7 @@ async def test_poll_video_status(client):
        "status": "completed",
        "unsigned_urls": ["https://example.com/video.mp4"],
    }
-    with patch("backend.app.routers.generate.openrouter.poll_video_status", new_callable=AsyncMock, return_value=mock_result):
+    with patch("app.routers.generate.openrouter.poll_video_status", new_callable=AsyncMock, return_value=mock_result):
        resp = await client.get(
            "/generate/video/status",
            params={"polling_url": "https://openrouter.ai/api/v1/videos/gen-vid-1"},
@@ -337,7 +441,7 @@ async def test_poll_video_status_unauthenticated(client):

async def test_poll_video_status_upstream_error(client):
    token = await _user_token(client)
-    with patch("backend.app.routers.generate.openrouter.poll_video_status", new_callable=AsyncMock, side_effect=Exception("timeout")):
+    with patch("app.routers.generate.openrouter.poll_video_status", new_callable=AsyncMock, side_effect=Exception("timeout")):
        resp = await client.get(
            "/generate/video/status",
            params={"polling_url": "https://openrouter.ai/api/v1/videos/gen-vid-1"},
@@ -354,9 +458,130 @@ async def test_generate_video_from_image_unauthenticated(client):
    assert resp.status_code == 401


# ---------------------------------------------------------------------------
# Video job DB storage
# ---------------------------------------------------------------------------

async def test_generate_video_stored_in_db(client):
    """Submitting a video job inserts a row into generated_videos."""
    token = await _user_token(client)
    with patch("app.routers.generate.openrouter.generate_video", new_callable=AsyncMock, return_value=FAKE_VIDEO):
        resp = await client.post(
            "/generate/video",
            json={"model": "stability/stable-video", "prompt": "Ocean waves"},
            headers={"Authorization": f"Bearer {token}"},
        )
    assert resp.status_code == 200

    row = db_module.get_conn().execute(
        "SELECT job_id, model_id, prompt, status FROM generated_videos WHERE job_id = ?",
        ["gen-vid-1"],
    ).fetchone()
    assert row is not None
    assert row[0] == "gen-vid-1"
    assert row[1] == "stability/stable-video"
    assert row[2] == "Ocean waves"
    assert row[3] == "queued"


async def test_generate_video_from_image_stored_in_db(client):
    """Submitting a from-image job inserts a row into generated_videos."""
    token = await _user_token(client)
    with patch("app.routers.generate.openrouter.generate_video_from_image", new_callable=AsyncMock, return_value=FAKE_VIDEO_DONE):
        resp = await client.post(
            "/generate/video/from-image",
            json={
                "model": "runway/gen-3",
                "image_url": "https://example.com/cat.jpg",
                "prompt": "Cat runs",
            },
            headers={"Authorization": f"Bearer {token}"},
        )
    assert resp.status_code == 200

    row = db_module.get_conn().execute(
        "SELECT job_id, model_id, prompt, status FROM generated_videos WHERE job_id = ?",
        ["gen-vid-2"],
    ).fetchone()
    assert row is not None
    assert row[1] == "runway/gen-3"
    assert row[2] == "Cat runs"


async def test_poll_video_updates_db_on_completion(client):
    """Polling a completed job updates the row status and video_url."""
    token = await _user_token(client)
    # First submit a job
    with patch("app.routers.generate.openrouter.generate_video", new_callable=AsyncMock, return_value=FAKE_VIDEO):
        await client.post(
            "/generate/video",
            json={"model": "stability/stable-video", "prompt": "Test"},
            headers={"Authorization": f"Bearer {token}"},
        )

    # Now poll and get completed status
    mock_result = {
        "id": "gen-vid-1",
        "status": "completed",
        "unsigned_urls": ["https://example.com/video.mp4"],
    }
    with patch("app.routers.generate.openrouter.poll_video_status", new_callable=AsyncMock, return_value=mock_result):
        await client.get(
            "/generate/video/status",
            params={"polling_url": "https://openrouter.ai/api/v1/videos/gen-vid-1"},
            headers={"Authorization": f"Bearer {token}"},
        )

    row = db_module.get_conn().execute(
        "SELECT status, video_url FROM generated_videos WHERE job_id = ?",
        ["gen-vid-1"],
    ).fetchone()
    assert row is not None
    assert row[0] == "completed"
    assert row[1] == "https://example.com/video.mp4"


async def test_list_generated_videos_empty(client):
    """GET /generate/videos returns empty list initially."""
    token = await _user_token(client)
    resp = await client.get(
        "/generate/videos",
        headers={"Authorization": f"Bearer {token}"},
    )
    assert resp.status_code == 200
    assert resp.json() == []


async def test_list_generated_videos_returns_own_jobs(client):
    """GET /generate/videos returns the current user's jobs only."""
    token = await _user_token(client)
    with patch("app.routers.generate.openrouter.generate_video", new_callable=AsyncMock, return_value=FAKE_VIDEO):
        await client.post(
            "/generate/video",
            json={"model": "stability/stable-video", "prompt": "Waves"},
            headers={"Authorization": f"Bearer {token}"},
        )

    resp = await client.get(
        "/generate/videos",
        headers={"Authorization": f"Bearer {token}"},
    )
    assert resp.status_code == 200
    data = resp.json()
    assert len(data) == 1
    assert data[0]["job_id"] == "gen-vid-1"
    assert data[0]["prompt"] == "Waves"
    assert data[0]["status"] == "queued"


async def test_list_generated_videos_unauthenticated(client):
    resp = await client.get("/generate/videos")
    assert resp.status_code == 401


async def test_generate_video_from_image_upstream_error(client):
    token = await _user_token(client)
-    with patch("backend.app.routers.generate.openrouter.generate_video_from_image", new_callable=AsyncMock, side_effect=Exception("error")):
+    with patch("app.routers.generate.openrouter.generate_video_from_image", new_callable=AsyncMock, side_effect=Exception("error")):
        resp = await client.post(
            "/generate/video/from-image",
            json={"model": "runway/gen-3",
@@ -0,0 +1,184 @@
"""Tests for image upload and retrieval endpoints."""
import io
import os
import pytest
import pytest_asyncio
from httpx import AsyncClient, ASGITransport

from app.main import app
from app import db as db_module

os.environ.setdefault("JWT_SECRET", "test-secret-key-for-testing-only")
# Use a temp dir so file I/O works without polluting project data/
os.environ.setdefault("UPLOAD_DIR", "/tmp/test_uploads")


@pytest.fixture(autouse=True)
def fresh_db():
    db_module._conn = None
    db_module.init_db(":memory:")
    yield
    db_module.close_db()
    db_module._conn = None


@pytest_asyncio.fixture
async def client(fresh_db):
    transport = ASGITransport(app=app)
    async with AsyncClient(transport=transport, base_url="http://test") as ac:
        yield ac


async def _user_token(client) -> str:
    await client.post("/auth/register", json={"email": "user@example.com", "password": "secret123"})
    resp = await client.post("/auth/login", json={"email": "user@example.com", "password": "secret123"})
    return resp.json()["access_token"]


async def _other_token(client) -> str:
    await client.post("/auth/register", json={"email": "other@example.com", "password": "secret123"})
    resp = await client.post("/auth/login", json={"email": "other@example.com", "password": "sec123"})
    return resp.json()["access_token"]


def _png_bytes() -> bytes:
    """Minimal valid 1x1 PNG."""
    return (
        b"\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x00\x01\x00\x00\x00\x01"
        b"\x08\x02\x00\x00\x00\x90wS\xde\x00\x00\x00\x0cIDATx\x9cc\xf8\x0f\x00"
        b"\x00\x01\x01\x00\x05\x18\xd4n\x00\x00\x00\x00IEND\xaeB`\x82"
    )


# ---------------------------------------------------------------------------
# POST /images/upload
# ---------------------------------------------------------------------------

async def test_upload_image_success(client):
    token = await _user_token(client)
    resp = await client.post(
        "/images/upload",
        files={"file": ("test.png", io.BytesIO(_png_bytes()), "image/png")},
        headers={"Authorization": f"Bearer {token}"},
    )
    assert resp.status_code == 201
    data = resp.json()
    assert data["filename"] == "test.png"
    assert data["content_type"] == "image/png"
    assert "id" in data
    assert data["size_bytes"] > 0


async def test_upload_image_unauthenticated(client):
    resp = await client.post(
        "/images/upload",
        files={"file": ("test.png", io.BytesIO(_png_bytes()), "image/png")},
    )
    assert resp.status_code == 401


async def test_upload_image_unsupported_type(client):
    token = await _user_token(client)
    resp = await client.post(
        "/images/upload",
        files={"file": ("doc.pdf", io.BytesIO(b"%PDF-fake"), "application/pdf")},
        headers={"Authorization": f"Bearer {token}"},
    )
    assert resp.status_code == 415


async def test_upload_image_too_large(client, monkeypatch):
    import app.routers.images as images_mod
    monkeypatch.setattr(images_mod, "MAX_SIZE_BYTES", 5)
    token = await _user_token(client)
    resp = await client.post(
        "/images/upload",
        files={"file": ("big.png", io.BytesIO(b"x" * 10), "image/png")},
        headers={"Authorization": f"Bearer {token}"},
    )
    assert resp.status_code == 413


# ---------------------------------------------------------------------------
# GET /images/
# ---------------------------------------------------------------------------

async def test_list_images_empty(client):
    token = await _user_token(client)
    resp = await client.get("/images/", headers={"Authorization": f"Bearer {token}"})
    assert resp.status_code == 200
    assert resp.json() == []


async def test_list_images_returns_own_only(client):
    token = await _user_token(client)
    other = await _other_token(client)

    # Upload one image as user, one as other
    for tok, name in [(token, "mine.png"), (other, "theirs.png")]:
        await client.post(
            "/images/upload",
            files={"file": (name, io.BytesIO(_png_bytes()), "image/png")},
            headers={"Authorization": f"Bearer {tok}"},
        )

    resp = await client.get("/images/", headers={"Authorization": f"Bearer {token}"})
    assert resp.status_code == 200
    items = resp.json()
    assert len(items) == 1
    assert items[0]["filename"] == "mine.png"


async def test_list_images_unauthenticated(client):
    resp = await client.get("/images/")
    assert resp.status_code == 401


# ---------------------------------------------------------------------------
# GET /images/{id}/file
# ---------------------------------------------------------------------------

async def test_serve_image_success(client):
    token = await _user_token(client)
    up = await client.post(
        "/images/upload",
        files={"file": ("pixel.png", io.BytesIO(_png_bytes()), "image/png")},
        headers={"Authorization": f"Bearer {token}"},
    )
    image_id = up.json()["id"]

    resp = await client.get(
        f"/images/{image_id}/file",
        headers={"Authorization": f"Bearer {token}"},
    )
    assert resp.status_code == 200
    assert resp.headers["content-type"].startswith("image/png")
    assert resp.content == _png_bytes()


async def test_serve_image_wrong_user(client):
    token = await _user_token(client)
    other = await _other_token(client)

    up = await client.post(
        "/images/upload",
        files={"file": ("secret.png", io.BytesIO(_png_bytes()), "image/png")},
        headers={"Authorization": f"Bearer {token}"},
    )
    image_id = up.json()["id"]

    resp = await client.get(
        f"/images/{image_id}/file",
        headers={"Authorization": f"Bearer {other}"},
    )
    assert resp.status_code == 403


async def test_serve_image_not_found(client):
    token = await _user_token(client)
    resp = await client.get(
        "/images/00000000-0000-0000-0000-000000000000/file",
        headers={"Authorization": f"Bearer {token}"},
    )
    assert resp.status_code == 404
@@ -0,0 +1,366 @@
"""Tests for model cache service and router."""
import json
import os
from datetime import datetime, timedelta, timezone
from unittest.mock import AsyncMock, patch

import pytest
import pytest_asyncio
from httpx import ASGITransport, AsyncClient

from app import db as db_module
from app.main import app
from app.services.models import (
    _extract_output_modality,
    _normalize_modality,
    _parse_modality,
    get_cached_models,
    get_model_output_modalities,
    is_cache_stale,
    refresh_models_cache,
)

os.environ.setdefault("JWT_SECRET", "test-secret-key-for-testing-only")
os.environ.setdefault("OPENROUTER_API_KEY", "test-key")

FAKE_MODELS_RAW = [
    {
        "id": "openai/gpt-4o",
        "name": "GPT-4o",
        "context_length": 128000,
        "pricing": {"prompt": "0.000005"},
        "architecture": {"modality": "text->text", "output_modalities": ["text"]},
    },
    {
        "id": "anthropic/claude-3-haiku",
        "name": "Claude 3 Haiku",
        "context_length": 200000,
        "pricing": {},
        "architecture": {"modality": "text+image->text", "output_modalities": ["text"]},
    },
    {
        "id": "openai/dall-e-3",
        "name": "DALL-E 3",
        "context_length": None,
        "pricing": {"image": "0.04"},
        "architecture": {"modality": "text->image", "output_modalities": ["image"]},
    },
    {
        "id": "openai/sora-2",
        "name": "Sora 2",
        "context_length": None,
        "pricing": {"video": "0.10"},
        "architecture": {"modality": "text->video", "output_modalities": ["video"]},
    },
    {
        "id": "google/gemini-2.5-flash-image",
        "name": "Gemini 2.5 Flash Image",
        "context_length": None,
        "pricing": {},
        "architecture": {"output_modalities": ["image", "text"]},
    },
]


@pytest.fixture(autouse=True)
def fresh_db():
    db_module._conn = None
    db_module.init_db(":memory:")
    yield
    db_module.close_db()
    db_module._conn = None


@pytest_asyncio.fixture
async def client(fresh_db):
    transport = ASGITransport(app=app)
    async with AsyncClient(transport=transport, base_url="http://test") as ac:
        yield ac


async def _register_login(client, email, password, is_admin=False):
    """Register + login; optionally promote to admin directly in DB."""
    await client.post("/auth/register", json={"email": email, "password": password})
    if is_admin:
        db_module.get_conn().execute(
            "UPDATE users SET role = 'admin' WHERE email = ?", [email]
        )
    resp = await client.post("/auth/login", json={"email": email, "password": password})
    return resp.json()["access_token"]


# ---------------------------------------------------------------------------
# Unit tests: _parse_modality
# ---------------------------------------------------------------------------

def test_parse_modality_text():
    assert _parse_modality("text->text") == "text"


def test_parse_modality_multimodal_input_text_output():
    assert _parse_modality("text+image->text") == "text"


def test_parse_modality_image():
    assert _parse_modality("text->image") == "image"


def test_parse_modality_video():
    assert _parse_modality("text->video") == "video"


def test_parse_modality_audio():
    assert _parse_modality("text->audio") == "audio"


def test_parse_modality_no_arrow_fallback():
    assert _parse_modality("text") == "text"


def test_normalize_embedding_alias():
    assert _normalize_modality("embedding") == "embeddings"


def test_extract_output_modality_prefers_output_modalities():
    model = {
        "architecture": {
            "modality": "text->text",
            "output_modalities": ["image"],
        }
    }
    assert _extract_output_modality(model) == "image"


def test_extract_output_modality_legacy_fallback():
    model = {"architecture": {"modality": "text->audio"}}
    assert _extract_output_modality(model) == "audio"


# ---------------------------------------------------------------------------
# Unit tests: is_cache_stale
# ---------------------------------------------------------------------------

def test_cache_stale_when_empty():
    conn = db_module.get_conn()
    assert is_cache_stale(conn) is True


def test_cache_not_stale_after_fresh_insert():
    conn = db_module.get_conn()
    now = datetime.now(timezone.utc).replace(tzinfo=None)
    conn.execute(
        "INSERT INTO models_cache (model_id, name, modality, fetched_at) VALUES (?, ?, ?, ?)",
        ["openai/gpt-4o", "GPT-4o", "text", now],
    )
    assert is_cache_stale(conn) is False


def test_cache_stale_after_ttl_exceeded():
    conn = db_module.get_conn()
    # Store naive UTC to match DuckDB TIMESTAMP behaviour
    old_time = datetime.now(timezone.utc).replace(tzinfo=None) - timedelta(hours=25)
    conn.execute(
        "INSERT INTO models_cache (model_id, name, modality, fetched_at) VALUES (?, ?, ?, ?)",
        ["openai/gpt-4o", "GPT-4o", "text", old_time],
    )
    assert is_cache_stale(conn) is True


# ---------------------------------------------------------------------------
# Unit tests: refresh_models_cache + get_cached_models
# ---------------------------------------------------------------------------

async def test_refresh_stores_models():
    conn = db_module.get_conn()
    with patch(
        "app.services.models.openrouter.list_models",
        new_callable=AsyncMock,
        return_value=FAKE_MODELS_RAW,
    ):
        count = await refresh_models_cache(conn)
    assert count == 5
    all_models = get_cached_models(conn)
    assert len(all_models) == 5


async def test_refresh_replaces_old_cache():
    conn = db_module.get_conn()
    old_time = datetime.now(timezone.utc).replace(tzinfo=None) - timedelta(hours=30)
    conn.execute(
        "INSERT INTO models_cache (model_id, name, modality, fetched_at) VALUES (?, ?, ?, ?)",
        ["old/model", "Old Model", "text", old_time],
    )
    with patch(
        "app.services.models.openrouter.list_models",
        new_callable=AsyncMock,
        return_value=FAKE_MODELS_RAW,
    ):
        await refresh_models_cache(conn)
    ids = [m["id"] for m in get_cached_models(conn)]
    assert "old/model" not in ids
    assert "openai/gpt-4o" in ids
    assert len(ids) == 5


def test_get_cached_models_filter_by_modality():
    conn = db_module.get_conn()
    now = datetime.now(timezone.utc).replace(tzinfo=None)
    for m in FAKE_MODELS_RAW:
        modality = _extract_output_modality(m)
        conn.execute(
            "INSERT INTO models_cache (model_id, name, modality, fetched_at) VALUES (?, ?, ?, ?)",
            [m["id"], m["name"], modality, now],
        )
    text_models = get_cached_models(conn, modality="text")
    # gpt-4o, claude-3-haiku (gemini has output_modalities=["image","text"] → classified as "image")
    assert len(text_models) == 2
    assert all(m["modality"] == "text" for m in text_models)

    image_models = get_cached_models(conn, modality="image")
    # dall-e-3 + gemini (output_modalities starts with image)
    assert len(image_models) == 2
    image_ids = [m["id"] for m in image_models]
    assert "openai/dall-e-3" in image_ids
    assert "google/gemini-2.5-flash-image" in image_ids

    video_models = get_cached_models(conn, modality="video")
    assert len(video_models) == 1
    assert video_models[0]["id"] == "openai/sora-2"


# ---------------------------------------------------------------------------
# Integration tests: GET /models/
# ---------------------------------------------------------------------------

async def test_list_models_endpoint_auto_refreshes(client):
    token = await _register_login(client, "user@example.com", "secret123")
    with patch(
        "app.services.models.openrouter.list_models",
        new_callable=AsyncMock,
        return_value=FAKE_MODELS_RAW,
    ) as mock_fetch:
        resp = await client.get(
            "/models/", headers={"Authorization": f"Bearer {token}"}
        )
    assert resp.status_code == 200
    assert len(resp.json()) == 5
    assert mock_fetch.await_count >= 1


async def test_list_models_endpoint_uses_cache(client):
    token = await _register_login(client, "user@example.com", "secret123")
    conn = db_module.get_conn()
    now = datetime.now(timezone.utc).replace(tzinfo=None)
    conn.execute(
        "INSERT INTO models_cache (model_id, name, modality, fetched_at) VALUES (?, ?, ?, ?)",
        ["cached/model", "Cached Model", "text", now],
    )
    with patch(
        "app.services.models.openrouter.list_models",
        new_callable=AsyncMock,
    ) as mock_fetch:
        resp = await client.get(
            "/models/?modality=text", headers={"Authorization": f"Bearer {token}"}
        )
    assert resp.status_code == 200
    assert resp.json()[0]["id"] == "cached/model"
    mock_fetch.assert_not_awaited()


async def test_list_models_endpoint_requires_auth(client):
    resp = await client.get("/models/")
    assert resp.status_code == 401


async def test_list_models_filter_by_modality(client):
    token = await _register_login(client, "user@example.com", "secret123")
    with patch(
        "app.services.models.openrouter.list_models",
        new_callable=AsyncMock,
        return_value=FAKE_MODELS_RAW,
    ):
        resp = await client.get(
            "/models/?modality=image", headers={"Authorization": f"Bearer {token}"}
        )
    assert resp.status_code == 200
    data = resp.json()
    assert len(data) == 2  # dall-e-3 + gemini-2.5-flash-image
    image_ids = [m["id"] for m in data]
    assert "openai/dall-e-3" in image_ids
    assert "google/gemini-2.5-flash-image" in image_ids


# ---------------------------------------------------------------------------
# Integration tests: POST /models/refresh
# ---------------------------------------------------------------------------

async def test_refresh_endpoint_requires_admin(client):
    token = await _register_login(client, "user@example.com", "secret123")
    resp = await client.post(
        "/models/refresh", headers={"Authorization": f"Bearer {token}"}
    )
    assert resp.status_code == 403


async def test_refresh_endpoint_admin_succeeds(client):
    token = await _register_login(client, "admin@example.com", "secret123", is_admin=True)
    with patch(
        "app.services.models.openrouter.list_models",
        new_callable=AsyncMock,
        return_value=FAKE_MODELS_RAW,
    ):
        resp = await client.post(
            "/models/refresh", headers={"Authorization": f"Bearer {token}"}
        )
    assert resp.status_code == 200
    assert resp.json()["refreshed"] == 5


async def test_refresh_endpoint_502_on_openrouter_error(client):
    token = await _register_login(client, "admin@example.com", "secret123", is_admin=True)
    with patch(
        "app.services.models.openrouter.list_models",
        new_callable=AsyncMock,
        side_effect=RuntimeError("network error"),
    ):
        resp = await client.post(
            "/models/refresh", headers={"Authorization": f"Bearer {token}"}
        )
    assert resp.status_code == 502


# ---------------------------------------------------------------------------
# Unit tests: get_model_output_modalities
# ---------------------------------------------------------------------------

async def test_get_model_output_modalities_image_only():
    conn = db_module.get_conn()
    with patch(
        "app.services.models.openrouter.list_models",
        new_callable=AsyncMock,
        return_value=FAKE_MODELS_RAW,
    ):
        await refresh_models_cache(conn)
    modalities = get_model_output_modalities(conn, "openai/dall-e-3")
    assert modalities == ["image"]


async def test_get_model_output_modalities_image_text():
    conn = db_module.get_conn()
    with patch(
        "app.services.models.openrouter.list_models",
        new_callable=AsyncMock,
        return_value=FAKE_MODELS_RAW,
    ):
        await refresh_models_cache(conn)
    modalities = get_model_output_modalities(conn, "google/gemini-2.5-flash-image")
    assert set(modalities) == {"image", "text"}


def test_get_model_output_modalities_unknown_model():
    conn = db_module.get_conn()
    result = get_model_output_modalities(conn, "unknown/model")
    assert result == []
@@ -4,8 +4,8 @@ import pytest
import pytest_asyncio
from httpx import AsyncClient, ASGITransport

-from backend.app.main import app
-from backend.app import db as db_module
+from app.main import app
+from app import db as db_module

os.environ.setdefault("JWT_SECRET", "test-secret-key-for-testing-only")

@@ -115,7 +115,9 @@ async def test_list_users_as_admin(client):
    resp = await client.get("/users", headers={"Authorization": f"Bearer {admin_token}"})
    assert resp.status_code == 200
    assert isinstance(resp.json(), list)
-    assert len(resp.json()) == 1
+    assert len(resp.json()) >= 1
+    emails = [u["email"] for u in resp.json()]
+    assert "user@example.com" in emails


async def test_list_users_as_regular_user(client):
@@ -4,7 +4,8 @@ Describes the relevant requirements and the driving forces that software archite

## Requirements Overview

-**Project name**: AI Allucanget Biz
+**Project name**: All You Can GET AI
**URL**: [https://ai.allucanget.biz](https://ai.allucanget.biz)
**Purpose**: Provide AI‑powered text, image, and video generation services via a web application.

Users can choose between different AI models for:
@@ -14,6 +15,8 @@ Users can choose between different AI models for:
- Text‑to‑video generation
- Image‑to‑video generation

+Users can create accounts, log in, and view their generation history in a gallery. An admin dashboard allows managing users, models, and video generation jobs.

## Quality Goals

| Priority | Quality Goal | Scenario |
@@ -22,5 +22,5 @@ Any requirement that constrains software architects in their freedom of design a

| Convention | Background / Motivation |
| -------------------- | --------------------------------------------------- |
-| Python 3.11+ | Modern language features, type hints |
+| Python 3.12+ | Modern language features, type hints |
| pytest for all tests | Consistent test tooling across backend and frontend |
@@ -5,21 +5,21 @@ Static decomposition of the system into building blocks (modules, components, su
## Level 1 – Whitebox Overall System

```text
-┌───────────────────────┐
+┌────────────────────────┐
│  Frontend (Flask)      │
-└───────┬───────────────┘
+└───────┬────────────────┘
        │ REST API calls
-┌───────▼───────────────┐
+┌───────▼────────────────┐
│  FastAPI Backend       │
│   ├─ Auth Service      │
│   ├─ User Service      │
│   ├─ AI Service        │
│   └─ DB Service (DuckDB)│
-└───────┬───────────────┘
+└───────┬────────────────┘
        │ DB access
-┌───────▼───────────────┐
+┌───────▼────────────────┐
│  DuckDB Database       │
-└───────────────────────┘
+└────────────────────────┘
```

**Motivation:** Separating the UI (Flask) from the API (FastAPI) allows independent scaling and testing of each layer.
@@ -66,17 +66,25 @@ Self-service profile management and admin user CRUD.

Operational endpoints for application management.

| Method | Path | Auth required | Admin only | Description |
| ------ | --------------------- | ------------- | ---------- | ------------------------------------- |
| ------ | --------------------------- | ------------- | ---------- | ------------------------------------------ |
| GET | `/admin/stats` | ✓ | ✓ | User counts by role, token activity |
| GET | `/admin/health/db` | ✓ | ✓ | DuckDB connectivity check |
| POST | `/admin/tokens/purge` | ✓ | ✓ | Remove expired/revoked refresh tokens |
| GET | `/admin/videos` | ✓ | ✓ | List all video jobs with user emails |
| POST | `/admin/videos/{id}/cancel` | ✓ | ✓ | Cancel a queued/processing video job |
| POST | `/admin/videos/{id}/retry` | ✓ | ✓ | Retry a failed/cancelled video job |
| DELETE | `/admin/videos/{id}` | ✓ | ✓ | Permanently delete a video job |
| POST | `/admin/videos/purge` | ✓ | ✓ | Delete old completed/failed/cancelled jobs |
| POST | `/admin/videos/timed-out` | ✓ | ✓ | Mark stale processing jobs as failed |
| GET | `/admin/models` | ✓ | ✓ | List cached OpenRouter models |
| POST | `/admin/models/refresh` | ✓ | ✓ | Refresh model cache from OpenRouter |
### White Box AI Service (`/ai`, `/generate`)

Model listing and multi-modal generation via openrouter.ai.

| Method | Path | Auth required | Description |
| ------ | ---------------------------- | ------------- | ------------------------------------------------------------------------------------------------------------------- |
| ------ | ------------------------------ | ------------- | ------------------------------------------------------------------------------------------------------------------- |
| GET | `/ai/models` | ✓ | List available OpenRouter models |
| POST | `/ai/chat` | ✓ | Multi-turn chat completion |
| POST | `/generate/text` | ✓ | Single-prompt text generation (optional system prompt) |
@@ -84,10 +92,15 @@ Model listing and multi-modal generation via openrouter.ai.
| POST | `/generate/video` | ✓ | Text-to-video (Sora 2 Pro, Veo 3.1 Fast) — returns `polling_url` |
| POST | `/generate/video/from-image` | ✓ | Image-to-video — returns `polling_url` |
| GET | `/generate/video/status` | ✓ | Poll video generation status via `polling_url` |
| GET | `/generate/images` | ✓ | List current user's generated images |
| GET | `/generate/images/{id}` | ✓ | Get a single generated image |
| GET | `/generate/videos` | ✓ | List current user's video jobs |
| GET | `/generate/videos/{id}` | ✓ | Get a single video job |
| POST | `/generate/videos/{id}/cancel` | ✓ | Cancel a queued/processing video job |

**Video generation flow:** The `/generate/video` and `/generate/video/from-image` endpoints submit a job to OpenRouter's `/api/v1/videos` endpoint and return immediately with `status: "queued"` and a `polling_url`. Clients poll `/generate/video/status?polling_url=...` every 5 seconds until `status` is `"completed"` (returns `unsigned_urls`) or `"failed"`.
**Video generation flow:** The `/generate/video` and `/generate/video/from-image` endpoints queue a job in the local database and return immediately with `status: "queued"`. A background worker (`video_worker.py`) submits the job to OpenRouter's `/api/v1/videos` endpoint, receives a `polling_url`, and polls it periodically until the job reaches `"completed"` or `"failed"`. The frontend polls `GET /generate/video/{id}/status` every 5 seconds to show live status updates.

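The client-side half of this flow reduces to a poll-until-terminal loop. The sketch below is illustrative, not repository code: `fetch_status` stands in for an authenticated `GET /generate/video/{id}/status` call, and the default interval mirrors the documented 5-second cadence.

```python
import time


def poll_video_status(fetch_status, interval=5.0, max_attempts=120, sleep=time.sleep):
    """Poll a status callable until the video job reaches a terminal state.

    fetch_status() returns a dict such as {"status": "...", "video_url": ...};
    terminal states are "completed", "failed", and "cancelled".
    """
    for _ in range(max_attempts):
        job = fetch_status()
        if job["status"] in ("completed", "failed", "cancelled"):
            return job
        sleep(interval)  # wait before the next status request
    raise TimeoutError("video job did not reach a terminal state")
```

Injecting `sleep` keeps the helper testable without real delays.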
**Image generation routing:** The router auto-detects the model type — models containing `"flux"` or `"gpt-5-image-mini"` are routed to `/chat/completions` with `modalities: ["image"]`, while others (e.g. DALL-E 3) use the legacy `/images/generations` endpoint.
**Image generation routing:** The router auto-detects the model type — models containing `"flux"` or `"gpt-5-image-mini"` are routed to `/chat/completions` with `modalities: ["image"]` (or `["image", "text"]` depending on cached output modalities), while others (e.g. DALL-E 3) use the legacy `/images/generations` endpoint.

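The auto-detection described above amounts to a pure routing function. A minimal sketch (the real router also consults the cached output modalities, which this version omits):

```python
def image_endpoint_for(model: str) -> str:
    """Pick the OpenRouter endpoint for an image model.

    Models containing "flux" or "gpt-5-image-mini" use chat completions
    with image modalities; everything else falls back to the legacy
    /images/generations endpoint.
    """
    name = model.lower()
    if "flux" in name or "gpt-5-image-mini" in name:
        return "/chat/completions"
    return "/images/generations"
```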
### White Box DB Service (`db.py`)

@@ -48,20 +48,31 @@ Describes concrete behavior and interactions of the system's building blocks in
1. User submits video generation form with prompt, model, aspect ratio, resolution, and duration
2. Flask POSTs to `POST /generate/video` with JWT header
3. Auth Service validates JWT
4. Backend calls OpenRouter `POST /api/v1/videos` with model, prompt, aspect_ratio, resolution, duration_seconds
5. OpenRouter returns `{"id": "...", "polling_url": "..."}` with `status: "queued"`
6. FastAPI returns `VideoResponse` with `polling_url` to Flask
7. Flask renders result page with polling UI
8. Frontend JavaScript polls `GET /generate/video/status?polling_url=...` every 5 seconds
9. When `status` becomes `"completed"`, the response includes `unsigned_urls` — the video is displayed in a `<video>` element
10. If `status` becomes `"failed"`, an error message is shown
4. Backend inserts a row into `generated_videos` with `status: "queued"` and returns the DB job ID
5. Flask renders result page with polling UI
6. Background worker (`video_worker.py`) picks up queued jobs every 15 seconds:
   - Calls OpenRouter `POST /api/v1/videos` with model, prompt, and parameters
   - Receives `{"id": "...", "polling_url": "..."}` and updates the DB row to `status: "processing"`
   - Polls the `polling_url` every 15 seconds until `status` is `"completed"` or `"failed"`
   - Updates the DB row with the final status and video URL
7. Frontend JavaScript polls `GET /generate/video/{db_id}/status` every 5 seconds
8. When `status` becomes `"completed"`, the response includes `video_url` — the video is displayed in a `<video>` element
9. If `status` becomes `"failed"`, an error message is shown
10. User can click "Cancel Job" to mark the job as `"cancelled"` (stops local polling, does not stop the provider job)

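One pass of the worker described in step 6 can be sketched with plain dictionaries. This is illustrative only: `jobs` stands in for the `generated_videos` table, and `submit`/`poll` stand in for the OpenRouter HTTP calls.

```python
def process_queued_jobs(jobs, submit, poll):
    """Run one worker pass over an in-memory job table.

    jobs maps job id -> {"status": ..., ...}; submit(job) returns a
    polling_url, poll(url) returns the provider's status dict.
    """
    for job_id, job in jobs.items():
        if job["status"] == "queued":
            # Submit to the provider and remember where to poll.
            job["polling_url"] = submit(job)
            job["status"] = "processing"
        if job["status"] == "processing":
            result = poll(job["polling_url"])
            if result["status"] in ("completed", "failed"):
                # Persist the terminal state and the result URL, if any.
                job["status"] = result["status"]
                job["video_url"] = result.get("video_url")
```

In the real worker the submit and poll steps are separated by 15-second sleeps; here they are collapsed so the pass is testable.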
## Scenario 4a: Video Generation (Image-to-Video)

1. User provides an image URL, motion prompt, model, aspect ratio, resolution, and duration
2. Flask POSTs to `POST /generate/video/from-image` with JWT header
3. Backend calls OpenRouter `POST /api/v1/videos` with `image_url`, prompt, and parameters
4. Same polling flow as Scenario 4
3. Same background worker flow as Scenario 4, with `generation_type: "image_to_video"`

## Scenario 4b: Video Job Cancellation

1. User clicks "Cancel Job" on the video detail page or gallery pending card
2. Frontend POSTs to `/generate/video/{id}/cancel`
3. Backend verifies the job belongs to the user and is not in a terminal state
4. Backend updates the DB row `status` to `"cancelled"`
5. Frontend stops polling and updates the UI to show "Job cancelled"

## Scenario 5: Token Refresh

@@ -5,40 +5,79 @@ Describes:
1. Technical infrastructure used to execute your system, with infrastructure elements like geographical locations, environments, computers, processors, channels and net topologies.
2. Mapping of (software) building blocks to those infrastructure elements.

**See**: [Coolify Deployment Guide](./deployment/coolify.md) for detailed instructions.

## Infrastructure Level 1

```text
┌─────────────────────────────────────────────┐
│ Host / VM │
│ ┌─────────────┐ ┌─────────────────────┐ │
│ │ frontend │ │ backend │ │
│ │ (Flask) │ │ (FastAPI) │ │
│ │ :5000 │ │ :8000 │ │
│ └──────┬──────┘ └──────────┬──────────┘ │
│ │ │ │
│ └────────┬──────────┘ │
│ │ │
│ ┌───────▼────────┐ │
│ │ db (DuckDB) │ │
│ │ data/app.db │ │
│ └────────────────┘ │
└─────────────────────────────────────────────┘
```

Hosted on a single VM running Docker containers, deployed via Coolify with Nixpacks to 192.168.88.18 for production.

Containers run behind nginx at 192.168.88.11, which handles TLS termination and reverse proxying to the frontend on port 12016 and the backend on port 12015. The database is a file on the host filesystem at `data/app.db`, accessed by the backend service.

```mermaid
graph TD
    Users[Users / Internet]
    Nginx[nginx reverse proxy\nTLS termination]
    Users -->|HTTPS| Nginx

    subgraph Coolify Server
        direction TB
        subgraph AI Frontend
            AI_Frontend[AI Frontend\nFlask\nServes HTML/CSS/JS UI]
        end
        subgraph AI Backend
            AI_Backend[AI Backend\nFastAPI\nCommunicates with openrouter.ai API]
            db[(DuckDB Database\nFile: data/app.db)]
            AI_Backend --> db
        end
        AI_Frontend -->|BACKEND_URL:12015| AI_Backend
    end
    Nginx -->|12016| AI_Frontend
```

**Motivation:** All three components run on a single VM (or as Docker containers) for simplicity and low operational overhead.
**Motivation:** All three components run as Docker containers for simplicity and low operational overhead.

**Quality and/or Performance Features:** The frontend and backend are stateless; DuckDB persists data on the host filesystem.

**Mapping of Building Blocks to Infrastructure:**

| Building Block | Container / Process | Port |
| --------------- | ---------------------------- | ---- |
| Flask frontend | `frontend` | 5000 |
| FastAPI backend | `backend` | 8000 |
| --------------- | ---------------------------- | --------------- |
| Nginx | `nginx` | 80/443 (public) |
| Coolify Server | `coolify` | — |
| Flask frontend | `frontend` | 12016 |
| FastAPI backend | `backend` | 12015 |
| DuckDB | File on host (`data/app.db`) | — |

## Infrastructure Level 2

### Docker Compose (alternative)
### Coolify with Nixpacks (Production)

All three services can be run with `docker compose up`. The `backend` mounts the `data/` volume for DuckDB persistence.
Both services are deployed as separate Nixpacks resources in Coolify, which results in two separate containers running on the same host. The database is a file on the host filesystem, mounted as a volume in the backend container.

#### Frontend

```mermaid
graph TD
    subgraph Coolify Server
        direction TB
        subgraph AI Frontend
            AI_Frontend[AI Frontend\nNixpacks\nBase Dir: /frontend]
        end
    end
    Users[Users / Internet] -->|HTTPS| AI_Frontend
```

#### Backend

```mermaid
graph TD
    subgraph Coolify Server
        direction TB
        subgraph AI Backend
            AI_Backend[AI Backend\nNixpacks\nBase Dir: /backend]
            db[(DuckDB Database\nVolume: /app/data)]
            AI_Backend --> db
        end
    end
    Frontend[Frontend Container] -->|BACKEND_URL:12015| AI_Backend
```

@@ -4,6 +4,14 @@ Describes crosscutting concepts (practices, patterns, regulations or solution id

> Pick **only** the most-needed topics for your system.

## OpenRouter API Integration

See [docs/8.1-openrouter.md](./8.1-openrouter.md) for details on how the backend integrates with OpenRouter for multi-modal AI generation, including image and video generation flows.

## DuckDB Concurrency and Storage

See [docs/8.2-duckdb.md](./8.2-duckdb.md) for details on how the backend handles concurrent access to DuckDB and manages the database file on the host filesystem.

## Security

- All API endpoints (except `/auth/login`) require a valid JWT in the `Authorization: Bearer` header.
@@ -25,72 +33,3 @@ Describes crosscutting concepts (practices, patterns, regulations or solution id

- All secrets (API keys, DB path, JWT secret) loaded from environment variables or `.env` file.
- No secrets committed to source control.

## DuckDB Concurrency and Storage

### Single Writer Per Process

DuckDB allows only one process to open the database file in read-write mode at a time. The FastAPI backend must be run with a single worker (`uvicorn --workers 1`). Running multiple workers against the same DuckDB file will cause startup errors.

### asyncio.Lock for Writes

All database write operations (`INSERT`, `UPDATE`, `DELETE`) in the FastAPI async context are wrapped in a single `asyncio.Lock` (`get_write_lock()` from `backend/app/db.py`). This prevents concurrent coroutines from issuing overlapping writes within the single process, which would otherwise raise DuckDB optimistic concurrency errors.

Read operations (`SELECT`) do not require the lock — DuckDB's MVCC provides consistent read snapshots.

### Schema

```sql
CREATE TABLE users (
    id UUID DEFAULT uuid() PRIMARY KEY,
    email VARCHAR NOT NULL UNIQUE,
    password_hash VARCHAR NOT NULL,
    role VARCHAR DEFAULT 'user',
    created_at TIMESTAMP DEFAULT now(),
    updated_at TIMESTAMP DEFAULT now()
);

CREATE TABLE refresh_tokens (
    jti UUID DEFAULT uuid() PRIMARY KEY,
    user_id UUID NOT NULL, -- soft FK to users.id
    issued_at TIMESTAMP DEFAULT now(),
    expires_at TIMESTAMP NOT NULL,
    revoked BOOLEAN DEFAULT false
);
```

> The `REFERENCES users(id)` foreign key is intentionally omitted from `refresh_tokens`. DuckDB fires FK checks on `UPDATE` of the parent table (including email changes), causing false constraint violations. Referential integrity is enforced manually: deleting a user also deletes their refresh tokens in the same write transaction.

### Access Tokens

Access tokens are **stateless** JWTs — not stored in the database. They are validated by signature and expiry claim only. The short TTL (15 minutes) limits the blast radius if a token is leaked.

### Refresh Tokens

Refresh tokens store a JTI (JWT ID) UUID in the `refresh_tokens` table. On each use the old JTI is revoked and a new one issued (rotation). On logout the JTI is immediately revoked. Expired and revoked tokens can be purged via `POST /admin/tokens/purge`.

### Future: AI Generation History

AI generation metadata (model, prompt, cost, result URLs) can be stored as JSON columns in a future `generation_history` table in DuckDB, enabling per-user analytics and usage dashboards at zero extra infrastructure cost.

## OpenRouter API Integration

### Image Generation

Image generation uses two different OpenRouter endpoints depending on the model:

- **Legacy endpoint** (`/images/generations`): Used by DALL-E 3 and similar models. Returns `data[].url` and `data[].b64_json`.
- **Chat completions** (`/chat/completions` with `modalities: ["image"]`): Used by FLUX.2 Klein 4B and GPT-5 Image Mini. Returns `choices[0].message.images[].image_url.url` as base64 data URLs.

The router auto-detects the model type and routes accordingly. Image configuration (`aspect_ratio`, `image_size`) is passed via `image_config` for chat-based models.

### Video Generation

Video generation uses OpenRouter's `/api/v1/videos` endpoint with a **submit-and-poll** pattern:

1. `POST /api/v1/videos` with `model`, `prompt`, `aspect_ratio`, `resolution`, `duration_seconds`
2. Response: `{"id": "job_id", "polling_url": "https://..."}` with `status: "queued"`
3. Poll `GET polling_url` every 5 seconds until `status` is `"completed"` or `"failed"`
4. Completed response includes `unsigned_urls: [str]` array with video download URLs

Supported models: `openai/sora-2-pro`, `google/veo-3.1-fast`. Both text-to-video and image-to-video use the same `/api/v1/videos` endpoint (image-to-video includes `image_url` in the request body).

@@ -0,0 +1,31 @@
# OpenRouter API Integration

## Text Generation

> [!warning]
> TODO: Add more details on how the backend integrates with OpenRouter for text generation, including chat completions and single-prompt generation flows.

## Image Generation

Image generation uses two different OpenRouter endpoints depending on the model:

- **Legacy endpoint** (`/images/generations`): Used by DALL-E 3 and similar models. Returns `data[].url` and `data[].b64_json`.
- **Chat completions** (`/chat/completions` with `modalities: ["image"]`): Used by FLUX.2 Klein 4B and GPT-5 Image Mini. Returns `choices[0].message.images[].image_url.url` as base64 data URLs.

The router auto-detects the model type and routes accordingly. Image configuration (`aspect_ratio`, `image_size`) is passed via `image_config` for chat-based models.

## Video Generation

Video generation uses OpenRouter's `/api/v1/videos` endpoint with a **submit-and-poll** pattern orchestrated by a background worker:

1. User submits a video request via `POST /generate/video` (or `/generate/video/from-image`)
2. Backend inserts a row into `generated_videos` with `status: "queued"` and returns immediately
3. Background worker (`video_worker.py`) picks up queued jobs every 15 seconds:
   - Calls `POST /api/v1/videos` with `model`, `prompt`, `aspect_ratio`, `resolution`, `duration`
   - Receives `{"id": "job_id", "polling_url": "https://..."}` and updates DB to `status: "processing"`
   - Polls `GET polling_url` every 15 seconds until `status` is `"completed"` or `"failed"`
   - Updates DB with final status, `video_url`, and any `error` message
4. Frontend polls `GET /generate/video/{db_id}/status` every 5 seconds to show live updates
5. Completed response includes `video_url` — the video is displayed in a `<video>` element

Supported models: `openai/sora-2-pro`, `google/veo-3.1-fast`. Both text-to-video and image-to-video use the same `/api/v1/videos` endpoint (image-to-video includes `frame_images` with `first_frame` in the request body).

@@ -0,0 +1,46 @@
# DuckDB Concurrency and Storage

## Single Writer Per Process

DuckDB allows only one process to open the database file in read-write mode at a time. The FastAPI backend must be run with a single worker (`uvicorn --workers 1`). Running multiple workers against the same DuckDB file will cause startup errors.

## asyncio.Lock for Writes

All database write operations (`INSERT`, `UPDATE`, `DELETE`) in the FastAPI async context are wrapped in a single `asyncio.Lock` (`get_write_lock()` from `backend/app/db.py`). This prevents concurrent coroutines from issuing overlapping writes within the single process, which would otherwise raise DuckDB optimistic concurrency errors.

Read operations (`SELECT`) do not require the lock — DuckDB's MVCC provides consistent read snapshots.

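The locking discipline can be sketched as follows. This is a minimal stand-in for `get_write_lock()` in `backend/app/db.py`, not the backend's actual code; `execute` stands in for a DuckDB connection's `execute` method.

```python
import asyncio

# One lock per process, shared by every coroutine that writes.
_write_lock = asyncio.Lock()


def get_write_lock() -> asyncio.Lock:
    """Return the process-wide write lock."""
    return _write_lock


async def run_write(execute, sql: str, *params):
    """Serialize a write statement; reads bypass the lock entirely."""
    async with get_write_lock():
        return execute(sql, *params)
```

Because the lock lives at module scope, any number of concurrent coroutines funnel their writes through it one at a time.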
## Schema

```sql
CREATE TABLE users (
    id UUID DEFAULT uuid() PRIMARY KEY,
    email VARCHAR NOT NULL UNIQUE,
    password_hash VARCHAR NOT NULL,
    role VARCHAR DEFAULT 'user',
    created_at TIMESTAMP DEFAULT now(),
    updated_at TIMESTAMP DEFAULT now()
);

CREATE TABLE refresh_tokens (
    jti UUID DEFAULT uuid() PRIMARY KEY,
    user_id UUID NOT NULL, -- soft FK to users.id
    issued_at TIMESTAMP DEFAULT now(),
    expires_at TIMESTAMP NOT NULL,
    revoked BOOLEAN DEFAULT false
);
```

> The `REFERENCES users(id)` foreign key is intentionally omitted from `refresh_tokens`. DuckDB fires FK checks on `UPDATE` of the parent table (including email changes), causing false constraint violations. Referential integrity is enforced manually: deleting a user also deletes their refresh tokens in the same write transaction.

## Access Tokens

Access tokens are **stateless** JWTs — not stored in the database. They are validated by signature and expiry claim only. The short TTL (15 minutes) limits the blast radius if a token is leaked.

## Refresh Tokens

Refresh tokens store a JTI (JWT ID) UUID in the `refresh_tokens` table. On each use the old JTI is revoked and a new one issued (rotation). On logout the JTI is immediately revoked. Expired and revoked tokens can be purged via `POST /admin/tokens/purge`.

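The rotation step can be sketched in a few lines. Illustrative only: the `tokens` dict stands in for the `refresh_tokens` table, and the row shape is simplified.

```python
import uuid


def rotate_refresh_token(tokens: dict, old_jti: str) -> str:
    """Revoke old_jti and issue a fresh JTI for the same user."""
    row = tokens[old_jti]
    if row["revoked"]:
        # A revoked JTI must never mint a new token (replay protection).
        raise PermissionError("refresh token already revoked")
    row["revoked"] = True
    new_jti = str(uuid.uuid4())
    tokens[new_jti] = {"revoked": False, "user_id": row["user_id"]}
    return new_jti
```

Rejecting an already-revoked JTI is what makes a stolen-then-replayed refresh token detectable.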
## Future: AI Generation History

AI generation metadata (model, prompt, cost, result URLs) can be stored as JSON columns in a future `generation_history` table in DuckDB, enabling per-user analytics and usage dashboards at zero extra infrastructure cost.

@@ -1,6 +1,6 @@
# Architecture Documentation

This file is the entry point for the architecture documentation of **AI Allucanget Biz**.
This file is the entry point for the architecture documentation of **All You Can GET AI**.

The documentation follows the [arc42 template](https://arc42.org/overview) and is split into 12 section files, each covering a specific aspect of the architecture. Read the sections in order for a full picture, or jump directly to the section most relevant to you.

@@ -1,20 +1,20 @@
# Coolify Deployment Guide

This guide covers deploying `ai.allucanget.biz` using [Coolify](https://coolify.io) from the repository `https://git.allucanget.biz/allucanget/ai.allucanget.biz.git`.
This guide covers deploying `ai.allucanget.biz` using [Coolify](https://coolify.io) with Nixpacks from the repository `https://git.allucanget.biz/allucanget/ai.allucanget.biz.git`.

## Architecture Overview

The application consists of two Python services:

| Service | Framework | Port | Description |
| -------- | ----------------- | ---- | ------------------------------------------ |
| Backend | FastAPI + uvicorn | 8000 | REST API, auth, AI generation, DuckDB |
| Frontend | Flask + gunicorn | 5000 | SSR web UI, session auth, proxy to backend |
| -------- | ----------------- | ----- | ------------------------------------------ |
| Backend | FastAPI + uvicorn | 12015 | REST API, auth, AI generation, DuckDB |
| Frontend | Flask + gunicorn | 12016 | SSR web UI, session auth, proxy to backend |

Coolify's built-in reverse proxy routes traffic:

- `/api/*` → Backend (port 8000)
- `/` → Frontend (port 5000)
- `/api/*` → Backend (port 12015)
- `/` → Frontend (port 12016)

## Prerequisites

@@ -29,14 +29,18 @@ Coolify's built-in reverse proxy routes traffic:
3. Select the `ai.allucanget.biz` repository
4. Choose the `main` branch
5. Set **Build Pack** to `nixpacks`
6. Set **Base Directory** to `/backend`
7. Set **Ports Exposed** to `8000`
6. Set **Base Directory** to `/backend` - this tells Nixpacks to look in the `backend/` subdirectory for `requirements.txt` and the Python application
7. Set **Ports Exposed** to `12015`
8. Set **Start Command** to:

```txt
uvicorn backend.app.main:app --host 0.0.0.0 --port 8000
uvicorn app.main:app --host 0.0.0.0 --port 12015
```

9. Click **Create Resource**

> **Important:** Nixpacks copies the **contents** of the Base Directory to `/app/` in the container. When Base Directory is `/backend`, the `backend/` folder wrapper is removed — only `app/`, `tests/`, and `requirements.txt` are copied. Therefore the start command uses `app.main:app` (not `backend.app.main:app`).

### Backend Environment Variables

Add these as **Runtime** environment variables in Coolify:
@@ -46,7 +50,7 @@ Add these as **Runtime** environment variables in Coolify:
| `OPENROUTER_API_KEY` | OpenRouter API key for AI generation | `sk-or-v1-...` |
| `JWT_SECRET` | Secret key for JWT token signing | Generate with `openssl rand -hex 32` |
| `APP_URL` | Public URL of the backend | `https://api.ai.allucanget.biz` |
| `APP_NAME` | Application name | `AI Allucanget` |
| `APP_NAME` | Application name | `All You Can GET AI` |
| `CORS_ORIGINS` | Comma-separated allowed origins | `https://ai.allucanget.biz` |

## Step 2: Create Frontend Service

@@ -55,22 +59,27 @@ Add these as **Runtime** environment variables in Coolify:
2. Select the same repository
3. Choose the `main` branch
4. Set **Build Pack** to `nixpacks`
5. Set **Base Directory** to `/frontend`
6. Set **Ports Exposed** to `5000`
5. Set **Base Directory** to `/frontend` - this tells Nixpacks to look in the `frontend/` subdirectory for `requirements.txt` and the Python application
6. Set **Ports Exposed** to `12016`
7. Set **Start Command** to:

```txt
gunicorn frontend.app.main:app --bind 0.0.0.0:5000 --workers 2 --timeout 120
gunicorn app.main:app --bind 0.0.0.0:12016 --workers 2 --timeout 120
```

8. Click **Create Resource**

> **Note:** Nixpacks will automatically detect and install only the production dependencies from `requirements.txt`.
> **Important:** Nixpacks copies the **contents** of the Base Directory to `/app/` in the container. When Base Directory is `/frontend`, the `frontend/` folder wrapper is removed — only `app/`, `tests/`, and `requirements.txt` are copied. Therefore the start command uses `app.main:app` (not `frontend.app.main:app`).

### Frontend Environment Variables

Add these as **Runtime** environment variables in Coolify:

| Variable | Description | Example |
| ------------------ | ----------------------------------------- | -------------------------------------------------------------- |
| ------------------ | ----------------------------------------- | --------------------------------------------------------------- |
| `FLASK_SECRET_KEY` | Flask session cookie signing key | Generate with `openssl rand -hex 32` |
| `BACKEND_URL` | Internal URL to reach the backend service | `http://localhost:8000` (or use Coolify's internal networking) |
| `BACKEND_URL` | Internal URL to reach the backend service | `http://localhost:12015` (or use Coolify's internal networking) |

## Step 3: Configure Reverse Proxy

@@ -79,42 +88,15 @@ Coolify provides a built-in reverse proxy. Configure routing rules:
### Backend Proxy Rules

- **Domain**: `api.ai.allucanget.biz` (or subdomain of your choice)
- **Port**: `8000`
- **Port**: `12015`
- **Path**: `/api/*` → forward to backend

### Frontend Proxy Rules

- **Domain**: `ai.allucanget.biz`
- **Port**: `5000`
- **Port**: `12016`
- **Path**: `/` → forward to frontend

### Nginx Configuration (Optional)

If you need custom Nginx configuration, create `nginx/coolify.conf`:

```nginx
# Reverse proxy configuration for Coolify
# This file is for reference — Coolify's built-in proxy handles routing

# Backend API proxy
location /api/ {
    proxy_pass http://backend:8000;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}

# Frontend proxy
location / {
    proxy_pass http://frontend:5000;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```

## Step 4: SSL/TLS

Enable HTTPS in Coolify for both services:
@@ -137,6 +119,12 @@ If you want to persist DuckDB data:

## Troubleshooting

### Backend healthcheck stays unhealthy

- Check backend logs in Coolify
- Verify `OPENROUTER_API_KEY` and `JWT_SECRET` are set
- Verify volume mount at `/app/data` is writable

### Backend won't start

- Check that `OPENROUTER_API_KEY` is set
@@ -146,7 +134,7 @@ If you want to persist DuckDB data:
### Frontend can't reach backend

- Ensure `BACKEND_URL` points to the correct internal URL
- If both services are on the same Coolify server, use `http://localhost:8000`
- If both services are on the same Coolify server, use `http://localhost:12015`
- Check that the backend service is running and healthy

### CORS errors
@@ -165,11 +153,11 @@ If you want to persist DuckDB data:
All required environment variables:

| Variable | Service | Required |
| -------------------- | -------- | -------------------------------- |
| -------------------- | -------- | ------------------------------------- |
| `OPENROUTER_API_KEY` | Backend | Yes |
| `JWT_SECRET` | Backend | Yes |
| `APP_URL` | Backend | Yes |
| `APP_NAME` | Backend | No (defaults to "AI Allucanget") |
| `APP_NAME` | Backend | No (defaults to "All You Can GET AI") |
| `CORS_ORIGINS` | Backend | Yes |
| `FLASK_SECRET_KEY` | Frontend | Yes |
| `BACKEND_URL` | Frontend | Yes |
@@ -185,13 +173,3 @@ All required environment variables:
- [ ] Domain names configured
- [ ] Health checks passing
- [ ] Logs reviewed for errors

## Nixpacks Configuration

The project includes Nixpacks configuration files for both services:

- `nixpacks.toml` — Shared configuration (Python version, packages)
- `backend/nixpacks.toml` — Backend-specific (uvicorn, port 8000)
- `frontend/nixpacks.toml` — Frontend-specific (gunicorn, port 5000)

Nixpacks automatically detects Python projects and installs dependencies from `requirements.txt`. No additional configuration is needed for basic deployments.

@@ -0,0 +1,21 @@
FROM python:3.12-slim

WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    gcc \
    && rm -rf /var/lib/apt/lists/*

# Copy requirements and install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY . .

# Expose port
EXPOSE 12016

# Run the application
CMD ["gunicorn", "app.main:app", "--bind", "0.0.0.0:12016", "--workers", "2", "--timeout", "120"]
@@ -3,7 +3,8 @@ import os


class Config:
    SECRET_KEY = os.getenv("FLASK_SECRET_KEY", "dev-secret-change-in-production")
    BACKEND_URL = os.getenv("BACKEND_URL", "http://localhost:8000")
    SECRET_KEY = os.getenv(
        "FLASK_SECRET_KEY", "dev-secret-change-in-production")
    BACKEND_URL = os.getenv("BACKEND_URL", "http://localhost:12015")
    SESSION_COOKIE_HTTPONLY = True
    SESSION_COOKIE_SAMESITE = "Lax"

+323 -14
@@ -1,9 +1,11 @@
"""Flask frontend application."""
import functools
from datetime import datetime, timezone

import httpx
from flask import (
    Flask,
    Response,
    flash,
    jsonify,
    redirect,
@@ -13,7 +15,7 @@ from flask import (
    url_for,
)

from frontend.app.config import Config
from .config import Config

app = Flask(__name__)
app.config.from_object(Config)
@@ -23,6 +25,42 @@ app.config.from_object(Config)
# Helpers
# ---------------------------------------------------------------------------

@app.template_filter("fromisoformat")
def from_iso_format(s: str) -> datetime:
    """Convert ISO 8601 string to datetime object."""
    return datetime.fromisoformat(s)


@app.template_filter("humantime")
def human_time(dt: datetime) -> str:
    """Format a datetime object into a human-readable relative time."""
    now = datetime.now(timezone.utc)
    # Ensure dt is aware for comparison
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)

    diff = now - dt
    seconds = diff.total_seconds()

    if seconds < 60:
        return "just now"
    elif seconds < 3600:
        minutes = int(seconds / 60)
        return f"{minutes} minute{'s' if minutes > 1 else ''} ago"
    elif seconds < 86400:
        hours = int(seconds / 3600)
        return f"{hours} hour{'s' if hours > 1 else ''} ago"
    elif seconds < 2592000:
        days = int(seconds / 86400)
        return f"{days} day{'s' if days > 1 else ''} ago"
    elif seconds < 31536000:
        months = int(seconds / 2592000)
        return f"{months} month{'s' if months > 1 else ''} ago"
    else:
        years = int(seconds / 31536000)
        return f"{years} year{'s' if years > 1 else ''} ago"

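The bucket logic in the `humantime` filter above can be checked without the Flask app. A condensed standalone sketch using the same thresholds (60 s, 3600 s, 86400 s), trimmed to the first few buckets for brevity:

```python
from datetime import datetime, timedelta, timezone


def relative_time(dt: datetime, now: datetime) -> str:
    """Condensed mirror of the humantime filter's first buckets."""
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)
    seconds = (now - dt).total_seconds()
    if seconds < 60:
        return "just now"
    if seconds < 3600:
        m = int(seconds / 60)
        return f"{m} minute{'s' if m > 1 else ''} ago"
    if seconds < 86400:
        h = int(seconds / 3600)
        return f"{h} hour{'s' if h > 1 else ''} ago"
    d = int(seconds / 86400)
    return f"{d} day{'s' if d > 1 else ''} ago"


now = datetime(2025, 1, 1, tzinfo=timezone.utc)
print(relative_time(now - timedelta(seconds=30), now))  # just now
print(relative_time(now - timedelta(minutes=5), now))   # 5 minutes ago
print(relative_time(now - timedelta(hours=2), now))     # 2 hours ago
```

Note that exactly 60 seconds falls into the "minute" bucket (`seconds < 60` is strict), so boundary values round up to the next unit.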
def _backend(path: str) -> str:
    return f"{app.config['BACKEND_URL']}{path}"

@@ -34,6 +72,60 @@ def _api(method: str, path: str, *, token: str | None = None, **kwargs):
    return httpx.request(method, _backend(path), headers=headers, timeout=30, **kwargs)


def _model_matches_modality(model: dict, modality: str) -> bool:
    """Heuristic fallback when backend modality filter returns empty."""
    model_modality = (model.get("modality") or "").lower()
    if model_modality == modality:
        return True

    text = f"{model.get('id', '')} {model.get('name', '')}".lower()
    keywords = {
        "image": ["image", "dall-e", "flux", "stable-diffusion", "sdxl", "recraft", "ideogram", "gpt-image"],
        "video": ["video", "sora", "runway", "veo", "kling", "pika", "luma", "wan"],
        "audio": ["audio", "speech", "voice", "tts", "transcribe", "whisper"],
    }

    if modality in keywords:
        return any(k in text for k in keywords[modality])

    if modality == "text":
        non_text_hits = any(
            k in text for k in keywords["image"] + keywords["video"] + keywords["audio"])
        return not non_text_hits

    return False

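The keyword heuristic above can also be exercised in isolation. A condensed standalone version (same structure; the keyword lists are shortened here for brevity):

```python
def matches_modality(model: dict, modality: str) -> bool:
    """Condensed version of the keyword-fallback heuristic above."""
    # Trust an explicit modality field when it matches
    if (model.get("modality") or "").lower() == modality:
        return True
    # Otherwise scan the model id + name for modality keywords
    text = f"{model.get('id', '')} {model.get('name', '')}".lower()
    keywords = {
        "image": ["image", "dall-e", "flux", "stable-diffusion", "sdxl"],
        "video": ["video", "sora", "runway", "veo", "kling"],
        "audio": ["audio", "speech", "tts", "whisper"],
    }
    if modality in keywords:
        return any(k in text for k in keywords[modality])
    # "text" is the negative space: no image/video/audio keyword hits
    if modality == "text":
        all_kw = [k for ks in keywords.values() for k in ks]
        return not any(k in text for k in all_kw)
    return False


print(matches_modality({"id": "black-forest-labs/flux-1.1-pro"}, "image"))  # True
print(matches_modality({"id": "openai/gpt-4o", "name": "GPT-4o"}, "text"))  # True
```

Treating "text" as the absence of any other modality keyword means a brand-new image model with an unrecognized name would be misclassified as text; that is the accepted trade-off of a fallback heuristic.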
def _load_models(token: str, modality: str) -> list[dict]:
    """Load models for modality; fallback to unfiltered cache if needed."""
    try:
        models_resp = _api("GET", "/models/", token=token,
                           params={"modality": modality})
    except httpx.RequestError:
        return []
    if models_resp.status_code == 200:
        try:
            models = models_resp.json()
        except ValueError:
            models = []
        if models:
            return models

    try:
        all_resp = _api("GET", "/models/", token=token)
    except httpx.RequestError:
        return []
    if all_resp.status_code != 200:
        return []

    try:
        all_models = all_resp.json()
    except ValueError:
        return []
    filtered = [m for m in all_models if _model_matches_modality(m, modality)]
    return filtered or all_models


def login_required(view):
    @functools.wraps(view)
    def wrapped(*args, **kwargs):
@@ -121,11 +213,96 @@ def dashboard():
    token = session["access_token"]
    resp = _api("GET", "/users/me", token=token)
    user = resp.json() if resp.status_code == 200 else {}
    return render_template("dashboard.html", user=user)
    img_resp = _api("GET", "/images/", token=token)
    images = img_resp.json() if img_resp.status_code == 200 else []
    gen_resp = _api("GET", "/generate/images", token=token)
    generated_images = gen_resp.json() if gen_resp.status_code == 200 else []

    vid_resp = _api("GET", "/generate/videos", token=token)
    videos = vid_resp.json() if vid_resp.status_code == 200 else []
    pending_videos = [v for v in videos if v.get(
        "status") not in ("completed", "failed")]
    completed_videos = [v for v in videos if v.get("status") == "completed"]

    return render_template("dashboard.html", user=user, images=images,
                           generated_images=generated_images,
                           pending_videos=pending_videos,
                           completed_videos=completed_videos)


@app.get("/gallery")
@login_required
def gallery():
    token = session["access_token"]

    # Fetch all content types
    uploads_resp = _api("GET", "/images/", token=token)
    uploads = uploads_resp.json() if uploads_resp.status_code == 200 else []

    gen_images_resp = _api("GET", "/generate/images", token=token)
    generated_images = gen_images_resp.json(
    ) if gen_images_resp.status_code == 200 else []

    videos_resp = _api("GET", "/generate/videos", token=token)
    videos = videos_resp.json() if videos_resp.status_code == 200 else []

    # Separate pending videos
    pending_videos = [v for v in videos if v.get(
        "status") not in ("completed", "failed")]
    completed_videos = [v for v in videos if v.get("status") == "completed"]

    return render_template(
        "gallery.html",
        uploads=uploads,
        generated_images=generated_images,
        pending_videos=pending_videos,
        completed_videos=completed_videos,
    )


@app.get("/gallery/image/<image_id>")
@login_required
def image_detail(image_id: str):
    token = session["access_token"]
    resp = _api("GET", f"/generate/images/{image_id}", token=token)
    image = resp.json() if resp.status_code == 200 else None
    return render_template("image_detail.html", image=image)


@app.get("/gallery/video/<video_id>")
@login_required
def video_detail(video_id: str):
    token = session["access_token"]
    resp = _api("GET", f"/generate/videos/{video_id}", token=token)
    video = resp.json() if resp.status_code == 200 else None
    return render_template("video_detail.html", video=video)


@app.get("/gallery/upload/<image_id>")
@login_required
def upload_detail(image_id: str):
    token = session["access_token"]
    resp = _api("GET", f"/images/{image_id}", token=token)
    image = resp.json() if resp.status_code == 200 else None
    return render_template("upload_detail.html", image=image)


# ── Generate ──────────────────────────────────────────────────────────────

@app.get("/images/<image_id>/file")
@login_required
def serve_uploaded_image(image_id: str):
    resp = _api("GET", f"/images/{image_id}/file",
                token=session["access_token"])
    if resp.status_code != 200:
        return Response("Not found", status=404)
    return Response(
        resp.content,
        status=200,
        content_type=resp.headers.get("content-type", "image/jpeg"),
    )


@app.get("/generate")
@login_required
def generate():
@@ -135,25 +312,87 @@ def generate():
@app.route("/generate/text", methods=["GET", "POST"])
@login_required
def generate_text():
    result = error = None
    error = None
    token = session["access_token"]
    chat_history: list[dict] = session.get("chat_history", [])
    system_prompt: str = session.get("chat_system_prompt", "")
    model: str = session.get("chat_model", "")

    if request.method == "POST":
        resp = _api("POST", "/generate/text", token=session["access_token"], json={
            "model": request.form.get("model", "").strip(),
            "prompt": request.form.get("prompt", "").strip(),
        })
        action = request.form.get("action", "send")

        if action == "clear":
            session.pop("chat_history", None)
            session.pop("chat_system_prompt", None)
            session.pop("chat_model", None)
            return redirect(url_for("generate_text"))

        prompt = request.form.get("prompt", "").strip()
        model = request.form.get("model", "").strip()
        system_prompt = request.form.get("system_prompt", "").strip()

        # Persist model + system_prompt across turns
        session["chat_model"] = model
        session["chat_system_prompt"] = system_prompt

        if prompt:
            # Build messages: history (user/assistant only) + new user msg
            messages = [m for m in chat_history if m["role"]
                        in ("user", "assistant")]
            messages.append({"role": "user", "content": prompt})

            payload: dict = {
                "model": model,
                "messages": [{"role": m["role"], "content": m["content"]} for m in messages],
            }
            if system_prompt:
                payload["system_prompt"] = system_prompt

            resp = _api("POST", "/generate/text", token=token, json=payload)
        if resp.status_code == 200:
            result = resp.json()
                data = resp.json()
                chat_history = list(messages)
                chat_history.append({"role": "assistant", "content": data["content"],
                                     "usage": data.get("usage")})
                session["chat_history"] = chat_history
            else:
                try:
                    error = resp.json().get("detail", "Generation failed.")
    return render_template("generate_text.html", result=result, error=error)
                except Exception:
                    error = "Generation failed."

    models = _load_models(token, "text")
    return render_template(
        "generate_text.html",
        chat_history=session.get("chat_history", []),
        error=error,
        models=models,
        system_prompt=system_prompt,
        current_model=model,
    )


@app.route("/generate/image", methods=["GET", "POST"])
@login_required
def generate_image():
    result = error = None
    token = session["access_token"]
    if request.method == "POST":
        resp = _api("POST", "/generate/image", token=session["access_token"], json={
        # Upload reference image if provided
        ref_file = request.files.get("reference_image")
        if ref_file and ref_file.filename:
            up_resp = _api(
                "POST", "/images/upload",
                token=token,
                files={"file": (ref_file.filename,
                                ref_file.stream, ref_file.content_type)},
            )
            if up_resp.status_code not in (200, 201):
                error = up_resp.json().get("detail", "Image upload failed.")
                models = _load_models(token, "image")
                return render_template("generate_image.html", result=result, error=error, models=models)

        resp = _api("POST", "/generate/image", token=token, json={
            "model": request.form.get("model", "").strip(),
            "prompt": request.form.get("prompt", "").strip(),
            "n": int(request.form.get("n", 1)),
@@ -165,20 +404,22 @@ def generate_image():
            result = resp.json()
        else:
            error = resp.json().get("detail", "Generation failed.")
    return render_template("generate_image.html", result=result, error=error)
    models = _load_models(token, "image")
    return render_template("generate_image.html", result=result, error=error, models=models)


@app.route("/generate/video", methods=["GET", "POST"])
@login_required
def generate_video():
    result = error = None
    error = None
    token = session["access_token"]
    if request.method == "POST":
        mode = request.form.get("mode", "text")
        token = session["access_token"]
        duration_raw = request.form.get("duration_seconds", "")
        duration = int(
            duration_raw) if duration_raw.strip().isdigit() else None
        resolution = request.form.get("resolution", "").strip() or None

        if mode == "image":
            resp = _api("POST", "/generate/video/from-image", token=token, json={
                "model": request.form.get("model", "").strip(),
@@ -196,11 +437,21 @@ def generate_video():
                "duration_seconds": duration,
                "resolution": resolution,
            })

        if resp.status_code == 200:
            result = resp.json()
            # On success, redirect to the detail page to monitor progress
            db_id = result.get("db_id")
            if db_id:
                return redirect(url_for("video_detail", video_id=db_id))
            # Fallback for older backend versions
            flash("Video job started.", "success")
            return redirect(url_for("gallery"))
        else:
            error = resp.json().get("detail", "Generation failed.")
    return render_template("generate_video.html", result=result, error=error)

    models = _load_models(token, "video")
    return render_template("generate_video.html", error=error, models=models)


@app.get("/generate/video/status")
@@ -218,6 +469,24 @@ def generate_video_status():
    return jsonify(resp.json()), resp.status_code


@app.get("/generate/video/<video_id>/status")
@login_required
def generate_video_db_status(video_id: str):
    """Return current DB status for a video job (polled by frontend JS)."""
    resp = _api(
        "GET", f"/generate/videos/{video_id}", token=session["access_token"])
    return jsonify(resp.json()), resp.status_code


@app.post("/generate/video/<video_id>/cancel")
@login_required
def cancel_video_job(video_id: str):
    """Proxy cancel request to backend."""
    resp = _api(
        "POST", f"/generate/videos/{video_id}/cancel", token=session["access_token"])
    return jsonify(resp.json()), resp.status_code


# ── Admin ─────────────────────────────────────────────────────────────────

@app.get("/admin")
@@ -249,6 +518,46 @@ def admin_delete_user(user_id: str):
    return redirect(url_for("admin"))


@app.get("/admin/models")
@admin_required
def admin_models():
    """Show model cache status and list all models."""
    return render_template("admin/models.html")


# ── Admin API proxies (same-origin for browser JS, avoids mixed-content) ──

@app.get("/api/admin/videos")
@admin_required
def api_admin_list_videos():
    resp = _api("GET", "/admin/videos", token=session["access_token"])
    return jsonify(resp.json()), resp.status_code


@app.post("/api/admin/videos/<job_id>/retry")
@admin_required
def api_admin_retry_video(job_id: str):
    resp = _api(
        "POST", f"/admin/videos/{job_id}/retry", token=session["access_token"])
    return jsonify(resp.json()), resp.status_code


@app.post("/api/admin/videos/<job_id>/cancel")
@admin_required
def api_admin_cancel_video(job_id: str):
    resp = _api(
        "POST", f"/admin/videos/{job_id}/cancel", token=session["access_token"])
    return jsonify(resp.json()), resp.status_code


@app.delete("/api/admin/videos/<job_id>")
@admin_required
def api_admin_delete_video(job_id: str):
    resp = _api(
        "DELETE", f"/admin/videos/{job_id}", token=session["access_token"])
    return jsonify(resp.json()), resp.status_code


# ── Profile ───────────────────────────────────────────────────────────────

@app.route("/users/profile", methods=["GET", "POST"])
+100 -10
@@ -18,6 +18,28 @@ document.addEventListener("DOMContentLoaded", () => {
    });
  }

  // ── Image upload preview ───────────────────────────────
  const imageInput = document.getElementById("reference_image");
  const imagePreviewWrap = document.getElementById("image-upload-preview");
  const imagePreview = document.getElementById("image-upload-preview-img");
  const imageFilename = document.getElementById("image-upload-filename");

  if (imageInput && imagePreviewWrap && imagePreview && imageFilename) {
    imageInput.addEventListener("change", () => {
      const file = imageInput.files && imageInput.files[0];
      if (!file) {
        imagePreviewWrap.hidden = true;
        imagePreview.removeAttribute("src");
        imageFilename.textContent = "";
        return;
      }

      imagePreview.src = URL.createObjectURL(file);
      imageFilename.textContent = file.name;
      imagePreviewWrap.hidden = false;
    });
  }

  // ── Generate dropdown tabs ─────────────────────────────
  document.querySelectorAll(".tab-btn").forEach((btn) => {
    btn.addEventListener("click", () => {
@@ -41,15 +63,75 @@ document.addEventListener("DOMContentLoaded", () => {
  // ── Video status polling ───────────────────────────────
  const pollDiv = document.getElementById("video-poll-status");
  if (pollDiv) {
    const pollingUrl = pollDiv.dataset.pollingUrl;
    const videoId = pollDiv.dataset.videoId;
    const statusText = document.getElementById("poll-status-text");
    const videoContainer = document.getElementById("poll-video-container");
    const cancelBtn = document.getElementById("cancel-video-btn");
    const cancelMsg = document.getElementById("cancel-msg");
    const MAX_POLLS = 120; // ~10 minutes at 5s interval
    let pollCount = 0;
    let interval = null;

    const interval = setInterval(async () => {
    const stopPolling = () => {
      if (interval) {
        clearInterval(interval);
        interval = null;
      }
    };

    if (cancelBtn) {
      cancelBtn.addEventListener("click", async () => {
        cancelBtn.disabled = true;
        cancelBtn.textContent = "Cancelling…";
        try {
          const resp = await fetch(
            "/generate/video/status?polling_url=" +
              encodeURIComponent(pollingUrl),
            "/generate/video/" + encodeURIComponent(videoId) + "/cancel",
            { method: "POST" },
          );
          if (resp.ok) {
            stopPolling();
            cancelBtn.classList.add("hidden");
            if (cancelMsg) {
              cancelMsg.textContent = "Job cancelled.";
              cancelMsg.classList.remove("hidden", "text-red-500");
              cancelMsg.classList.add("text-gray-300");
            }
            if (statusText) {
              statusText.innerHTML = "Status: <strong>cancelled</strong>";
            }
          } else {
            const data = await resp.json().catch(() => ({}));
            cancelBtn.disabled = false;
            cancelBtn.textContent = "Cancel Job";
            if (cancelMsg) {
              cancelMsg.textContent = data.detail || "Cancel failed.";
              cancelMsg.classList.remove("hidden");
              cancelMsg.classList.add("text-red-500");
            }
          }
        } catch (e) {
          cancelBtn.disabled = false;
          cancelBtn.textContent = "Cancel Job";
          if (cancelMsg) {
            cancelMsg.textContent = "Network error.";
            cancelMsg.classList.remove("hidden");
            cancelMsg.classList.add("text-red-500");
          }
        }
      });
    }

    interval = setInterval(async () => {
      try {
        pollCount++;
        if (pollCount > MAX_POLLS) {
          stopPolling();
          pollDiv.innerHTML =
            '<div class="alert alert-warning">Polling timed out. Please refresh the page to check status.</div>';
          return;
        }
        const resp = await fetch(
          "/generate/video/" + encodeURIComponent(videoId) + "/status",
        );
        if (!resp.ok) return;
        const data = await resp.json();
@@ -59,8 +141,9 @@ document.addEventListener("DOMContentLoaded", () => {
        }

        if (data.status === "completed") {
          clearInterval(interval);
          if (data.video_url && videoContainer) {
          stopPolling();
          if (data.video_url) {
            if (videoContainer) {
              const vid = document.createElement("video");
              vid.src = data.video_url;
              vid.controls = true;
@@ -68,13 +151,20 @@ document.addEventListener("DOMContentLoaded", () => {
              videoContainer.appendChild(vid);
              const msg = pollDiv.querySelector("p");
              if (msg) msg.textContent = "Video ready!";
            } else {
              // video_detail page: reload to show the video element
              window.location.reload();
            }
          }
        } else if (data.status === "failed") {
          clearInterval(interval);
          stopPolling();
          pollDiv.innerHTML =
            '<div class="alert alert-error">Generation failed: ' +
            (data.error || "Unknown error") +
            "</div>";
            '<div class="alert alert-error">Generation failed.</div>';
        } else if (data.status === "cancelled") {
          stopPolling();
          if (cancelBtn) cancelBtn.classList.add("hidden");
          pollDiv.innerHTML =
            '<div class="alert alert-info">Job was cancelled.</div>';
        }
      } catch (e) {
        console.error("Video polling error:", e);

@@ -139,11 +139,15 @@ nav {

/* ─── Main layout ──────────────────────────────────────── */
main {
  max-width: 800px;
  max-width: 1200px;
  margin: 2rem auto;
  padding: 0 1rem;
}

main:has(.admin-page) {
  max-width: 1200px;
}

/* ─── Alerts ───────────────────────────────────────────── */
.alert {
  padding: 0.75rem 1rem;
@@ -359,6 +363,29 @@ pre {
  margin-top: 0.5rem;
}

.image-upload-preview {
  margin-top: 0.75rem;
}

.image-grid {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(160px, 1fr));
  gap: 1rem;
  margin-top: 0.75rem;
}

.image-grid-item {
  display: flex;
  flex-direction: column;
  align-items: center;
}

.image-grid-item .generated-image {
  width: 100%;
  aspect-ratio: 1 / 1;
  object-fit: cover;
}

/* ─── Admin table ──────────────────────────────────────── */
.stats-grid {
  display: grid;
@@ -592,7 +619,7 @@ main {

/* Card */
.card {
  background: #fff;
  background: rgba(255, 255, 255, 0.08);
  border-radius: 10px;
  padding: 2rem;
  box-shadow: 0 1px 4px rgba(0, 0, 0, 0.08);
@@ -672,3 +699,123 @@ pre {
  border-radius: 8px;
  margin-top: 0.5rem;
}

/* ─── Chat interface ─────────────────────────────────────────────────────── */
.chat-page {
  display: flex;
  flex-direction: column;
  height: calc(100vh - 100px);
  max-height: 900px;
}

.chat-header {
  display: flex;
  align-items: center;
  justify-content: space-between;
  margin-bottom: 0.75rem;
}

.chat-config {
  border: 1px solid var(--border, #ddd);
  border-radius: 6px;
  padding: 0.5rem 0.75rem;
  margin-bottom: 0.75rem;
  font-size: 0.9rem;
}

.chat-config summary {
  cursor: pointer;
  font-weight: 500;
  user-select: none;
}

.chat-config-body {
  display: flex;
  flex-direction: column;
  gap: 0.4rem;
  margin-top: 0.5rem;
}

.chat-history {
  flex: 1;
  overflow-y: auto;
  display: flex;
  flex-direction: column;
  gap: 0.75rem;
  padding: 0.5rem 0;
  border-top: 1px solid var(--border, #ddd);
  border-bottom: 1px solid var(--border, #ddd);
  margin-bottom: 0.75rem;
}

.chat-empty {
  color: var(--text-muted, #888);
  text-align: center;
  margin: auto;
  font-size: 0.9rem;
}

.chat-bubble {
  max-width: 80%;
  padding: 0.6rem 0.9rem;
  border-radius: 12px;
  font-size: 0.9rem;
  line-height: 1.5;
}

.chat-bubble--user {
  align-self: flex-end;
  background: var(--accent, #7c6ff7);
  color: #fff;
  border-bottom-right-radius: 3px;
}

.chat-bubble--assistant {
  align-self: flex-start;
  background: var(--surface-2, #f0f0f0);
  color: var(--text, #222);
  border-bottom-left-radius: 3px;
}

.bubble-role {
  display: block;
  font-size: 0.7rem;
  font-weight: 600;
  text-transform: uppercase;
  opacity: 0.7;
  margin-bottom: 0.25rem;
}

.bubble-content {
  white-space: pre-wrap;
  word-break: break-word;
}

.bubble-meta {
  display: block;
  font-size: 0.7rem;
  opacity: 0.6;
  margin-top: 0.3rem;
  text-align: right;
}

.chat-input-row {
  display: flex;
  gap: 0.5rem;
  align-items: flex-end;
}

.chat-input-textarea {
  flex: 1;
  resize: none;
  border-radius: 8px;
  padding: 0.5rem 0.75rem;
  font-size: 0.95rem;
  min-height: 2.5rem;
  max-height: 8rem;
}

.btn-sm {
  padding: 0.3rem 0.7rem;
  font-size: 0.8rem;
}

@@ -1,6 +1,6 @@
{% extends "base.html" %} {% block title %}Admin — AI Allucanget{% endblock %}
{% block content %}
<div class="card">
{% extends "base.html" %} {% block title %}Admin — All You Can GET AI{% endblock
%} {% block content %}
<div class="card admin-page">
  <h1>Admin Dashboard</h1>

  {% if stats %}
@@ -76,5 +76,204 @@
      </tbody>
    </table>
  </div>

  <!-- ── Video Jobs ──────────────────────────────────────────────── -->
  <h2 class="section-title" style="margin-top: 2rem">Video Jobs</h2>

  <div
    style="
      display: flex;
      gap: 1rem;
      align-items: center;
      flex-wrap: wrap;
      margin-bottom: 1rem;
    "
  >
    <label for="vj-status-filter" style="font-weight: 600"
      >Filter by status:</label
    >
    <select id="vj-status-filter" class="form-control" style="width: auto">
      <option value="">All</option>
      <option value="queued">Queued</option>
      <option value="processing">Processing</option>
      <option value="completed">Completed</option>
      <option value="failed">Failed</option>
      <option value="cancelled">Cancelled</option>
    </select>
    <label for="vj-sort" style="font-weight: 600">Sort:</label>
    <select id="vj-sort" class="form-control" style="width: auto">
      <option value="created_desc">Created (newest first)</option>
      <option value="created_asc">Created (oldest first)</option>
      <option value="updated_desc">Updated (newest first)</option>
      <option value="status_asc">Status (A–Z)</option>
      <option value="model_asc">Model (A–Z)</option>
    </select>
    <button id="vj-refresh" class="btn btn-sm">Refresh</button>
    <span
      id="vj-count"
      style="color: var(--text-muted, #888); font-size: 0.9em"
    ></span>
  </div>

  <div class="table-wrap">
    <table id="vj-table">
      <thead>
        <tr>
          <th>User</th>
          <th>Status</th>
          <th>Model</th>
          <th>Prompt</th>
          <th>Created</th>
          <th>Updated</th>
          <th>Actions</th>
        </tr>
      </thead>
      <tbody id="vj-tbody">
        <tr>
          <td colspan="7" class="text-muted">Loading…</td>
        </tr>
      </tbody>
    </table>
  </div>
</div>

<script>
  (function () {
    let allJobs = [];

    async function loadJobs() {
      document.getElementById("vj-tbody").innerHTML =
        '<tr><td colspan="7" class="text-muted">Loading…</td></tr>';
      try {
        const r = await fetch("/api/admin/videos");
        if (!r.ok) throw new Error(await r.text());
        allJobs = await r.json();
        renderJobs();
      } catch (e) {
        document.getElementById("vj-tbody").innerHTML =
          `<tr><td colspan="7" style="color:red;">Error: ${e.message}</td></tr>`;
      }
    }

    function renderJobs() {
      const statusFilter = document.getElementById("vj-status-filter").value;
      const sort = document.getElementById("vj-sort").value;

      let jobs = statusFilter
        ? allJobs.filter((j) => j.status === statusFilter)
        : [...allJobs];

      jobs.sort((a, b) => {
        if (sort === "created_asc")
          return new Date(a.created_at) - new Date(b.created_at);
        if (sort === "updated_desc")
          return new Date(b.updated_at) - new Date(a.updated_at);
        if (sort === "status_asc") return a.status.localeCompare(b.status);
        if (sort === "model_asc") return a.model_id.localeCompare(b.model_id);
        return new Date(b.created_at) - new Date(a.created_at); // created_desc default
      });

      document.getElementById("vj-count").textContent =
        `${jobs.length} job${jobs.length !== 1 ? "s" : ""}`;

      const tbody = document.getElementById("vj-tbody");
      if (jobs.length === 0) {
        tbody.innerHTML =
          '<tr><td colspan="7" class="text-muted">No jobs found.</td></tr>';
        return;
      }

      const statusColor = {
        completed: "color:var(--success-color,#4caf50)",
        failed: "color:var(--danger-color,#e53935)",
        cancelled: "color:var(--danger-color,#e53935)",
        processing: "color:var(--warning-color,#fb8c00)",
        queued: "color:var(--warning-color,#fb8c00)",
      };

      tbody.innerHTML = jobs
        .map((job) => {
          const sc = statusColor[job.status] || "";
          const canRetry =
            job.status === "failed" || job.status === "cancelled";
          const canCancel =
            job.status === "queued" || job.status === "processing";
          const actions = [
            canRetry
              ? `<button class="btn btn-sm vj-retry" data-id="${job.id}">Retry</button>`
              : "",
            canCancel
              ? `<button class="btn btn-sm vj-cancel" data-id="${job.id}">Cancel</button>`
              : "",
            `<button class="btn btn-sm btn-danger vj-delete" data-id="${job.id}">Delete</button>`,
          ].join(" ");
          const prompt =
            job.prompt.length > 60 ? job.prompt.slice(0, 57) + "…" : job.prompt;
          const created = job.created_at
            ? new Date(job.created_at).toLocaleString()
            : "—";
          const updated = job.updated_at
            ? new Date(job.updated_at).toLocaleString()
            : "—";
          return `<tr>
            <td>${job.user_email || "—"}</td>
            <td style="${sc};font-weight:600;">${job.status}</td>
            <td style="font-size:.85em;">${job.model_id}</td>
            <td title="${job.prompt.replace(/"/g, "&quot;")}">${prompt}</td>
            <td style="white-space:nowrap;">${created}</td>
            <td style="white-space:nowrap;">${updated}</td>
            <td style="white-space:nowrap;">${actions}</td>
          </tr>`;
        })
        .join("");
    }

    async function apiPost(path) {
      const r = await fetch(path, { method: "POST" });
      if (!r.ok) {
        const d = await r.json().catch(() => ({}));
        throw new Error(d.detail || r.statusText);
      }
      return r.json();
    }

    async function apiDelete(path) {
      const r = await fetch(path, { method: "DELETE" });
      if (!r.ok) {
        const d = await r.json().catch(() => ({}));
        throw new Error(d.detail || r.statusText);
      }
      return r.json();
    }

    document
      .getElementById("vj-tbody")
      .addEventListener("click", async function (e) {
        const btn = e.target.closest("button");
        if (!btn) return;
        const id = btn.dataset.id;
        try {
          if (btn.classList.contains("vj-retry"))
            await apiPost(`/api/admin/videos/${id}/retry`);
          if (btn.classList.contains("vj-cancel"))
            await apiPost(`/api/admin/videos/${id}/cancel`);
          if (btn.classList.contains("vj-delete")) {
            if (!confirm("Permanently delete this video job?")) return;
            await apiDelete(`/api/admin/videos/${id}`);
          }
          await loadJobs();
        } catch (err) {
          alert("Error: " + err.message);
        }
      });

    document
      .getElementById("vj-status-filter")
      .addEventListener("change", renderJobs);
    document.getElementById("vj-sort").addEventListener("change", renderJobs);
|
||||
document.getElementById("vj-refresh").addEventListener("click", loadJobs);
|
||||
|
||||
loadJobs();
|
||||
})();
|
||||
</script>
|
||||
{% endblock %}
|
||||
|
@@ -0,0 +1,154 @@
{% extends "base.html" %} {% block title %}Admin - Model Management{% endblock
%} {% block content %}
<div class="container mx-auto px-4 py-8">
  <h1 class="text-3xl font-bold mb-6">Admin: Model Management</h1>

  <!-- Cache Status -->
  <div class="bg-gray-800 p-4 rounded-lg shadow-md mb-6">
    <h2 class="text-xl font-semibold mb-2">Cache Status</h2>
    <div id="cache-status" class="grid grid-cols-2 gap-4">
      <p>
        <strong>Last Updated:</strong> <span id="last-updated">Loading...</span>
      </p>
      <p>
        <strong>Model Count:</strong> <span id="model-count">Loading...</span>
      </p>
    </div>
    <button
      id="refresh-button"
      class="mt-4 bg-blue-500 hover:bg-blue-700 text-white font-bold py-2 px-4 rounded"
    >
      Refresh Cache
    </button>
    <p id="refresh-status" class="mt-2 text-sm"></p>
  </div>

  <!-- Model List -->
  <div class="bg-gray-800 p-4 rounded-lg shadow-md">
    <h2 class="text-xl font-semibold mb-2">Available Models</h2>
    <table id="models-table" class="min-w-full divide-y divide-gray-700">
      <thead class="bg-gray-700">
        <tr>
          <th
            scope="col"
            class="px-6 py-3 text-left text-xs font-medium text-gray-300 uppercase tracking-wider"
          >
            Name
          </th>
          <th
            scope="col"
            class="px-6 py-3 text-left text-xs font-medium text-gray-300 uppercase tracking-wider"
          >
            ID
          </th>
          <th
            scope="col"
            class="px-6 py-3 text-left text-xs font-medium text-gray-300 uppercase tracking-wider"
          >
            Modality
          </th>
          <th
            scope="col"
            class="px-6 py-3 text-left text-xs font-medium text-gray-300 uppercase tracking-wider"
          >
            Context Length
          </th>
        </tr>
      </thead>
      <tbody
        id="models-table-body"
        class="bg-gray-800 divide-y divide-gray-700"
      >
        <!-- Data will be populated by JavaScript -->
        <tr>
          <td colspan="4" class="text-center py-4">Loading models...</td>
        </tr>
      </tbody>
    </table>
  </div>
</div>

<script>
  document.addEventListener("DOMContentLoaded", function () {
    const lastUpdatedEl = document.getElementById("last-updated");
    const modelCountEl = document.getElementById("model-count");
    const modelsTableBody = document.getElementById("models-table-body");
    const refreshButton = document.getElementById("refresh-button");
    const refreshStatus = document.getElementById("refresh-status");

    async function fetchCacheStatus() {
      try {
        const response = await fetch("/api/v1/admin/models/status");
        if (!response.ok) throw new Error("Failed to fetch status");
        const data = await response.json();
        lastUpdatedEl.textContent = data.last_updated
          ? new Date(data.last_updated).toLocaleString()
          : "Never";
        modelCountEl.textContent = data.model_count;
      } catch (error) {
        lastUpdatedEl.textContent = "Error";
        modelCountEl.textContent = "Error";
        console.error("Error fetching cache status:", error);
      }
    }

    async function fetchModels() {
      try {
        const response = await fetch("/api/v1/admin/models");
        if (!response.ok) throw new Error("Failed to fetch models");
        const models = await response.json();
        modelsTableBody.innerHTML = ""; // Clear loading message
        if (models.length === 0) {
          modelsTableBody.innerHTML =
            '<tr><td colspan="4" class="text-center py-4">No models found in cache.</td></tr>';
        } else {
          models.forEach((model) => {
            const row = `
              <tr>
                <td class="px-6 py-4 whitespace-nowrap">${model.name}</td>
                <td class="px-6 py-4 whitespace-nowrap font-mono text-sm">${model.id}</td>
                <td class="px-6 py-4 whitespace-nowrap">${model.modality}</td>
                <td class="px-6 py-4 whitespace-nowrap">${model.context_length || "N/A"}</td>
              </tr>
            `;
            modelsTableBody.innerHTML += row;
          });
        }
      } catch (error) {
        modelsTableBody.innerHTML =
          '<tr><td colspan="4" class="text-center py-4 text-red-500">Error loading models.</td></tr>';
        console.error("Error fetching models:", error);
      }
    }

    async function refreshCache() {
      refreshButton.disabled = true;
      refreshStatus.textContent = "Refreshing...";
      refreshStatus.classList.remove("text-red-500", "text-green-500");

      try {
        const response = await fetch("/api/v1/admin/models/refresh", {
          method: "POST",
        });
        const data = await response.json();
        if (!response.ok) {
          throw new Error(data.detail || "Failed to refresh cache");
        }
        refreshStatus.textContent = `Successfully refreshed ${data.refreshed} models. Total: ${data.total_models}.`;
        refreshStatus.classList.add("text-green-500");
        fetchCacheStatus();
        fetchModels();
      } catch (error) {
        refreshStatus.textContent = `Error: ${error.message}`;
        refreshStatus.classList.add("text-red-500");
      } finally {
        refreshButton.disabled = false;
      }
    }

    fetchCacheStatus();
    fetchModels();
    refreshButton.addEventListener("click", refreshCache);
  });
</script>
{% endblock %}
@@ -0,0 +1,182 @@
{% extends "base.html" %} {% block title %}Admin - Video Jobs{% endblock %} {%
block content %}
<div class="container mx-auto px-4 py-8">
  <h1 class="text-3xl font-bold mb-6">Admin: Video Jobs</h1>

  <!-- Purge Old Jobs -->
  <div class="bg-gray-800 p-4 rounded-lg shadow-md mb-6">
    <h2 class="text-xl font-semibold mb-2">Maintenance</h2>
    <p class="text-gray-400 mb-4">
      Delete all completed, failed, or cancelled jobs older than 30 days.
    </p>
    <button
      id="purge-button"
      class="bg-red-500 hover:bg-red-700 text-white font-bold py-2 px-4 rounded"
    >
      Purge Old Jobs
    </button>
    <p id="purge-status" class="mt-2 text-sm"></p>
  </div>

  <!-- Video Jobs Table -->
  <div class="bg-gray-800 p-4 rounded-lg shadow-md overflow-x-auto">
    <table class="min-w-full divide-y divide-gray-700">
      <thead class="bg-gray-700">
        <tr>
          <th
            scope="col"
            class="px-4 py-3 text-left text-xs font-medium text-gray-300 uppercase tracking-wider"
          >
            User
          </th>
          <th
            scope="col"
            class="px-4 py-3 text-left text-xs font-medium text-gray-300 uppercase tracking-wider"
          >
            Status
          </th>
          <th
            scope="col"
            class="px-4 py-3 text-left text-xs font-medium text-gray-300 uppercase tracking-wider"
          >
            Model
          </th>
          <th
            scope="col"
            class="px-4 py-3 text-left text-xs font-medium text-gray-300 uppercase tracking-wider"
          >
            Prompt
          </th>
          <th
            scope="col"
            class="px-4 py-3 text-left text-xs font-medium text-gray-300 uppercase tracking-wider"
          >
            Created
          </th>
          <th
            scope="col"
            class="px-4 py-3 text-left text-xs font-medium text-gray-300 uppercase tracking-wider"
          >
            Actions
          </th>
        </tr>
      </thead>
      <tbody id="jobs-table-body" class="bg-gray-800 divide-y divide-gray-700">
        <tr>
          <td colspan="6" class="text-center py-4">Loading jobs...</td>
        </tr>
      </tbody>
    </table>
  </div>
</div>

<script>
  document.addEventListener("DOMContentLoaded", function () {
    const jobsTableBody = document.getElementById("jobs-table-body");
    const purgeButton = document.getElementById("purge-button");
    const purgeStatus = document.getElementById("purge-status");

    async function fetchJobs() {
      try {
        const response = await fetch(
          "{{ config['BACKEND_URL'] }}/admin/videos",
          {
            headers: {
              Authorization: "Bearer {{ session['access_token'] }}",
            },
          },
        );
        if (!response.ok) throw new Error("Failed to fetch jobs");
        const jobs = await response.json();
        jobsTableBody.innerHTML = "";
        if (jobs.length === 0) {
          jobsTableBody.innerHTML =
            '<tr><td colspan="6" class="text-center py-4">No video jobs found.</td></tr>';
        } else {
          jobs.forEach((job) => {
            const statusClass =
              job.status === "completed"
                ? "text-green-400"
                : job.status === "failed" || job.status === "cancelled"
                  ? "text-red-400"
                  : "text-yellow-400";
            const cancelBtn =
              job.status === "queued" || job.status === "processing"
                ? `<button class="cancel-btn text-red-400 hover:text-red-600 text-sm" data-job-id="${job.id}">Cancel</button>`
                : "";
            const row = `
              <tr>
                <td class="px-4 py-3 whitespace-nowrap text-sm">${job.user_email || "Unknown"}</td>
                <td class="px-4 py-3 whitespace-nowrap text-sm font-semibold ${statusClass}">${job.status}</td>
                <td class="px-4 py-3 whitespace-nowrap text-sm">${job.model_id}</td>
                <td class="px-4 py-3 text-sm truncate max-w-xs">${job.prompt}</td>
                <td class="px-4 py-3 whitespace-nowrap text-sm">${new Date(job.created_at).toLocaleString()}</td>
                <td class="px-4 py-3 whitespace-nowrap text-sm">${cancelBtn}</td>
              </tr>
            `;
            jobsTableBody.innerHTML += row;
          });
        }
      } catch (error) {
        jobsTableBody.innerHTML =
          '<tr><td colspan="6" class="text-center py-4 text-red-500">Error loading jobs.</td></tr>';
        console.error("Error fetching jobs:", error);
      }
    }

    async function purgeJobs() {
      purgeButton.disabled = true;
      purgeStatus.textContent = "Purging...";
      purgeStatus.classList.remove("text-red-500", "text-green-500");

      try {
        const response = await fetch(
          "{{ config['BACKEND_URL'] }}/admin/videos/purge",
          {
            method: "POST",
            headers: {
              Authorization: "Bearer {{ session['access_token'] }}",
            },
          },
        );
        const data = await response.json();
        if (!response.ok)
          throw new Error(data.detail || "Failed to purge jobs");
        purgeStatus.textContent = `Purged ${data.deleted} jobs. ${data.remaining} remaining.`;
        purgeStatus.classList.add("text-green-500");
        fetchJobs();
      } catch (error) {
        purgeStatus.textContent = `Error: ${error.message}`;
        purgeStatus.classList.add("text-red-500");
      } finally {
        purgeButton.disabled = false;
      }
    }

    // Cancel button event delegation
    jobsTableBody.addEventListener("click", async function (e) {
      if (e.target.classList.contains("cancel-btn")) {
        const jobId = e.target.dataset.jobId;
        try {
          const response = await fetch(
            `{{ config['BACKEND_URL'] }}/admin/videos/${jobId}/cancel`,
            {
              method: "POST",
              headers: {
                Authorization: "Bearer {{ session['access_token'] }}",
              },
            },
          );
          if (!response.ok) throw new Error("Failed to cancel job");
          fetchJobs();
        } catch (error) {
          alert(`Error: ${error.message}`);
        }
      }
    });

    purgeButton.addEventListener("click", purgeJobs);
    fetchJobs();
  });
</script>
{% endblock %}
@@ -3,16 +3,17 @@
<head>
  <meta charset="UTF-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
  <title>{% block title %}AI Allucanget{% endblock %}</title>
  <title>{% block title %}All You Can GET AI{% endblock %}</title>
  <link
    rel="stylesheet"
    href="{{ url_for('static', filename='style.css') }}"
  />
  <script src="https://cdn.tailwindcss.com"></script>
</head>
<body>
  <header>
    <nav>
      <a href="{{ url_for('index') }}" class="brand">AI Allucanget</a>
      <a href="{{ url_for('index') }}" class="brand">All You Can GET AI</a>

      <button class="hamburger" aria-label="Open menu">
        <span></span><span></span><span></span>
@@ -21,15 +22,11 @@
      <div class="nav-links">
        {% if session.get('access_token') %}
        <a href="{{ url_for('dashboard') }}">Dashboard</a>
        <a href="{{ url_for('gallery') }}">Gallery</a>

        <div class="nav-dropdown">
          <a href="{{ url_for('generate_text') }}">Generate ▾</a>
          <div class="nav-dropdown-menu">
            <a href="{{ url_for('generate_text') }}">Text</a>
            <a href="{{ url_for('generate_image') }}">Image</a>
            <a href="{{ url_for('generate_video') }}">Video</a>
          </div>
        </div>
        <a href="{{ url_for('generate_text') }}">Generate Text</a>
        <a href="{{ url_for('generate_image') }}">Generate Image</a>
        <a href="{{ url_for('generate_video') }}">Generate Video</a>

        <a href="{{ url_for('profile') }}">Profile</a>
        {% if session.get('user_role') == 'admin' %}

@@ -1,9 +1,116 @@
{% extends "base.html" %}
{% block title %}Dashboard — AI Allucanget{% endblock %}
{% block content %}
{% extends "base.html" %} {% block title %}Dashboard — All You Can GET AI{%
endblock %} {% block content %}
<div class="card">
  <h1>Welcome{% if user.get('email') %}, {{ user.email }}{% endif %}</h1>
  <p>Role: <strong>{{ user.get('role', 'user') }}</strong></p>
  <a href="{{ url_for('generate') }}" class="btn">Start generating</a>
</div>
{% endblock %}

{% if pending_videos %}
<div class="card mt-2">
  <h2>Pending Video Jobs</h2>
  <div class="image-grid">
    {% for vid in pending_videos %}
    <a
      href="{{ url_for('video_detail', video_id=vid.id) }}"
      class="image-grid-item"
    >
      <div
        style="
          background: #1a1a1a;
          border-radius: 6px;
          padding: 2rem;
          text-align: center;
        "
      >
        <span class="text-muted">{{ vid.status | capitalize }} …</span>
      </div>
      <p class="text-muted" style="font-size: 0.75rem; margin-top: 0.25rem">
        <strong>{{ vid.model_id }}</strong><br />{{ vid.prompt[:80] }}{% if
        vid.prompt|length > 80 %}…{% endif %}
      </p>
    </a>
    {% endfor %}
  </div>
</div>
{% endif %} {% if generated_images %}
<div class="card mt-2">
  <h2>Generated images</h2>
  <div class="image-grid">
    {% for img in generated_images %}
    <a
      href="{{ url_for('image_detail', image_id=img.id) }}"
      class="image-grid-item"
    >
      <img
        src="{{ img.image_data }}"
        alt="{{ img.prompt }}"
        class="generated-image"
        loading="lazy"
      />
      <p class="text-muted" style="font-size: 0.75rem; margin-top: 0.25rem">
        <strong>{{ img.model_id }}</strong><br />{{ img.prompt[:80] }}{% if
        img.prompt|length > 80 %}…{% endif %}
      </p>
    </a>
    {% endfor %}
  </div>
</div>
{% endif %} {% if completed_videos %}
<div class="card mt-2">
  <h2>Generated videos</h2>
  <div class="image-grid">
    {% for vid in completed_videos %}
    <a
      href="{{ url_for('video_detail', video_id=vid.id) }}"
      class="image-grid-item"
    >
      {% if vid.video_url %}
      <video controls style="max-width: 100%; border-radius: 6px">
        <source src="{{ vid.video_url }}" />
        Your browser does not support the video tag.
      </video>
      {% else %}
      <div
        style="
          background: #1a1a1a;
          border-radius: 6px;
          padding: 2rem;
          text-align: center;
        "
      >
        <span class="text-muted">{{ vid.status | capitalize }} …</span>
      </div>
      {% endif %}
      <p class="text-muted" style="font-size: 0.75rem; margin-top: 0.25rem">
        <strong>{{ vid.model_id }}</strong><br />{{ vid.prompt[:80] }}{% if
        vid.prompt|length > 80 %}…{% endif %}<br />
        <em>{{ vid.status }}</em>
      </p>
    </a>
    {% endfor %}
  </div>
</div>
{% endif %} {% if images %}
<div class="card mt-2">
  <h2>Uploaded reference images</h2>
  <div class="image-grid">
    {% for img in images %}
    <a
      href="{{ url_for('upload_detail', image_id=img.id) }}"
      class="image-grid-item"
    >
      <img
        src="{{ url_for('serve_uploaded_image', image_id=img.id) }}"
        alt="{{ img.filename }}"
        class="generated-image"
        loading="lazy"
      />
      <p class="text-muted" style="font-size: 0.75rem; margin-top: 0.25rem">
        {{ img.filename }} — {{ (img.size_bytes / 1024) | round(1) }} KB
      </p>
    </a>
    {% endfor %}
  </div>
</div>
{% endif %} {% endblock %}

@@ -0,0 +1,300 @@
{% extends "base.html" %} {% block title %}My Gallery{% endblock %} {% block
content %}
<div
  class="container mx-auto px-4 py-8"
  data-current-page="1"
  data-per-page="12"
>
  <div class="container mx-auto px-4 py-8">
    <h1 class="text-3xl font-bold mb-6">My Gallery</h1>

    <!-- Pending Creations -->
    {% if pending_videos %}
    <div class="mb-12">
      <h2 class="text-2xl font-semibold mb-4 border-b border-gray-700 pb-2">
        Pending Creations
      </h2>
      <div
        class="grid grid-cols-1 md:grid-cols-2 lg:grid-cols-3 xl:grid-cols-4 gap-6"
      >
        {% for video in pending_videos %}
        <div
          class="block bg-gray-800 rounded-lg shadow-lg overflow-hidden hover:shadow-2xl transition-shadow duration-300 relative"
          data-pending-video-id="{{ video.id }}"
        >
          <a href="{{ url_for('video_detail', video_id=video.id) }}">
            <div class="p-4">
              <p class="font-bold text-lg truncate">{{ video.prompt }}</p>
              <p class="text-sm text-gray-400">
                Video Job Status:
                <span class="font-semibold text-yellow-400"
                  >{{ video.status }}</span
                >
              </p>
              <p class="text-xs text-gray-500 mt-2">
                Started: {{ video.created_at | fromisoformat | humantime }}
              </p>
            </div>
          </a>
          <div class="px-4 pb-4">
            <button
              class="cancel-pending-btn px-3 py-1 bg-red-600 hover:bg-red-700 text-white rounded text-xs"
              data-video-id="{{ video.id }}"
            >
              Cancel
            </button>
            <span class="cancel-pending-msg text-xs ml-2 hidden"></span>
          </div>
        </div>
        {% endfor %}
      </div>
    </div>
    {% endif %}

    <!-- Generated Images -->
    <div class="mb-12">
      <h2 class="text-2xl font-semibold mb-4 border-b border-gray-700 pb-2">
        Generated Images
      </h2>
      {% if generated_images %}
      <div
        class="grid grid-cols-1 md:grid-cols-2 lg:grid-cols-3 xl:grid-cols-4 gap-6"
      >
        {% for image in generated_images %}
        <a
          href="{{ url_for('image_detail', image_id=image.id) }}"
          class="block bg-gray-800 rounded-lg shadow-lg overflow-hidden hover:shadow-2xl transition-shadow duration-300"
        >
          <img
            src="{{ image.image_data }}"
            alt="{{ image.prompt }}"
            class="w-full h-48 object-cover"
          />
          <div class="p-4">
            <p class="font-bold text-sm truncate">{{ image.prompt }}</p>
            <p class="text-xs text-gray-400 mt-1">
              Image ID: {{ image.id[:8] }}...
            </p>
            <p class="text-xs text-gray-500 mt-1">
              {{ image.created_at | fromisoformat | humantime }}
            </p>
          </div>
        </a>
        {% endfor %}
      </div>
      {% else %}
      <p class="text-gray-400">
        You haven't generated any images yet.
        <a
          href="{{ url_for('generate_image') }}"
          class="text-blue-400 hover:underline"
          >Generate one now</a
        >.
      </p>
      {% endif %}
    </div>

    <!-- Generated Videos -->
    <div class="mb-12">
      <h2 class="text-2xl font-semibold mb-4 border-b border-gray-700 pb-2">
        Generated Videos
      </h2>
      {% if completed_videos %}
      <div
        class="grid grid-cols-1 md:grid-cols-2 lg:grid-cols-3 xl:grid-cols-4 gap-6"
      >
        {% for video in completed_videos %}
        <a
          href="{{ url_for('video_detail', video_id=video.id) }}"
          class="block bg-gray-800 rounded-lg shadow-lg overflow-hidden hover:shadow-2xl transition-shadow duration-300"
        >
          {% if video.video_url %}
          <img
            src="{{ video.video_url }}#t=0.1"
            alt="{{ video.prompt }}"
            class="w-full h-48 object-cover"
          />
          {% else %}
          <div class="w-full h-48 bg-black flex items-center justify-center">
            <svg
              class="w-12 h-12 text-gray-500"
              fill="none"
              stroke="currentColor"
              viewBox="0 0 24 24"
              xmlns="http://www.w3.org/2000/svg"
            >
              <path
                stroke-linecap="round"
                stroke-linejoin="round"
                stroke-width="2"
                d="M14.752 11.168l-3.197-2.132A1 1 0 0010 9.87v4.263a1 1 0 001.555.832l3.197-2.132a1 1 0 000-1.664z"
              ></path>
              <path
                stroke-linecap="round"
                stroke-linejoin="round"
                stroke-width="2"
                d="M21 12a9 9 0 11-18 0 9 9 0 0118 0z"
              ></path>
            </svg>
          </div>
          {% endif %}
          <div class="p-4">
            <p class="font-bold text-sm truncate">{{ video.prompt }}</p>
            <p class="text-xs text-gray-400 mt-1">
              Video ID: {{ video.id[:8] }}...
            </p>
            <p class="text-xs text-gray-500 mt-1">
              {{ video.created_at | fromisoformat | humantime }}
            </p>
          </div>
        </a>
        {% endfor %}
      </div>
      {% else %}
      <p class="text-gray-400">
        You haven't generated any videos yet.
        <a
          href="{{ url_for('generate_video') }}"
          class="text-blue-400 hover:underline"
          >Generate one now</a
        >.
      </p>
      {% endif %}
    </div>

    <!-- Uploaded Images -->
    <div>
      <h2 class="text-2xl font-semibold mb-4 border-b border-gray-700 pb-2">
        My Uploads
      </h2>
      {% if uploads %}
      <div
        class="grid grid-cols-1 md:grid-cols-2 lg:grid-cols-3 xl:grid-cols-4 gap-6"
      >
        {% for image in uploads %}
        <a
          href="{{ url_for('upload_detail', image_id=image.id) }}"
          class="block bg-gray-800 rounded-lg shadow-lg overflow-hidden hover:shadow-2xl transition-shadow duration-300"
        >
          <img
            src="{{ url_for('serve_uploaded_image', image_id=image.id) }}"
            alt="{{ image.filename }}"
            class="w-full h-48 object-cover"
          />
          <div class="p-4">
            <p class="font-bold text-sm truncate">{{ image.filename }}</p>
            <p class="text-xs text-gray-400 mt-1">
              Upload ID: {{ image.id[:8] }}...
            </p>
            <p class="text-xs text-gray-500 mt-1">
              {{ image.uploaded_at | fromisoformat | humantime }}
            </p>
          </div>
        </a>
        {% endfor %}
      </div>
      {% else %}
      <p class="text-gray-400">You haven't uploaded any images.</p>
      {% endif %}
    </div>
  </div>

  <!-- Infinite Scroll Loading Indicator -->
  <div id="loading-indicator" class="flex justify-center py-8 hidden">
    <div class="spinner"></div>
  </div>
  {% endblock %} {% block scripts %}
  <script>
    document.addEventListener("DOMContentLoaded", function () {
      const galleryContainers = document.querySelectorAll(".grid[data-grid]");
      const loadingIndicator = document.getElementById("loading-indicator");
      const container = document.querySelector(".container[data-current-page]");
      const currentPage = parseInt(container.dataset.currentPage);
      const perPage = parseInt(container.dataset.perPage);
      let isLoading = false;
      let hasMore = true;

      // Add data-grid attribute to all gallery grids
      document
        .querySelectorAll(".grid")
        .forEach((grid) => grid.setAttribute("data-grid", ""));

      // Infinite scroll handler
      window.addEventListener("scroll", async function () {
        if (!hasMore || isLoading) return;

        const scrollPosition = window.innerHeight + window.scrollY;
        const bottomThreshold = document.body.offsetHeight - 1000;

        if (scrollPosition >= bottomThreshold) {
          isLoading = true;
          loadingIndicator.classList.remove("hidden");
          // TODO: Implement actual fetching of next page of results and appending to the correct grid(s)
          // For demo purposes, we'll just simulate a delay and then hide the loading indicator
          // Simulate API call for next page
          // In real implementation, replace with actual backend fetch
          setTimeout(() => {
            isLoading = false;
            loadingIndicator.classList.add("hidden");
            // Real app would fetch /generate/images?page=${currentPage + 1}&limit=${perPage}
            // and /generate/videos similarly
          }, 1500);
        }
      });
      // Cancel pending video buttons
      document.querySelectorAll(".cancel-pending-btn").forEach((btn) => {
        btn.addEventListener("click", async (e) => {
          e.preventDefault();
          e.stopPropagation();
          const videoId = btn.dataset.videoId;
          const msgEl = btn.parentElement.querySelector(".cancel-pending-msg");
          btn.disabled = true;
          btn.textContent = "Cancelling…";
          try {
            const resp = await fetch(
              "/generate/video/" + encodeURIComponent(videoId) + "/cancel",
              { method: "POST" },
            );
            if (resp.ok) {
              btn.classList.add("hidden");
              if (msgEl) {
                msgEl.textContent = "Cancelled";
                msgEl.classList.remove("hidden", "text-red-500");
                msgEl.classList.add("text-gray-300");
              }
              const card = document.querySelector(
                '[data-pending-video-id="' + videoId + '"]',
              );
              if (card) {
                const statusSpan = card.querySelector(".text-yellow-400");
                if (statusSpan) {
                  statusSpan.textContent = "cancelled";
                  statusSpan.classList.remove("text-yellow-400");
                  statusSpan.classList.add("text-gray-400");
                }
              }
            } else {
              const data = await resp.json().catch(() => ({}));
              btn.disabled = false;
              btn.textContent = "Cancel";
              if (msgEl) {
                msgEl.textContent = data.detail || "Failed";
                msgEl.classList.remove("hidden");
                msgEl.classList.add("text-red-500");
              }
            }
          } catch (err) {
            btn.disabled = false;
            btn.textContent = "Cancel";
            if (msgEl) {
              msgEl.textContent = "Error";
              msgEl.classList.remove("hidden");
              msgEl.classList.add("text-red-500");
            }
          }
        });
      });
    });
  </script>
  {% endblock %}
</div>
@@ -1,5 +1,5 @@
{% extends "base.html" %} {% block title %}Generate — AI Allucanget{% endblock
%} {% block content %}
{% extends "base.html" %} {% block title %}Generate — All You Can GET AI{%
endblock %} {% block content %}
<div class="card">
  <h1>Generate</h1>
  <p class="text-muted">

@@ -1,53 +1,62 @@
{% extends "base.html" %}
{% block title %}Image Generation — AI Allucanget{% endblock %}
{% block title %}Image Generation — All You Can GET AI{% endblock %}
{% block content %}
<div class="card">
  <h1>Image Generation</h1>
  <form method="post">
  <form method="post" enctype="multipart/form-data">
    <label for="model">Model</label>
    {% if models %}
    <select id="model" name="model" required>
      {% for m in models %}
      <option value="{{ m.id }}" {{ "selected" if request.form.get('model', '') == m.id else "" }}>{{ m.name }}</option>
      {% endfor %}
    </select>
    {% else %}
    <input id="model" name="model" type="text" required
      placeholder="e.g. openai/dall-e-3"
      placeholder="e.g. google/gemini-2.5-flash-image"
      value="{{ request.form.get('model', '') }}">
    <p class="text-muted mt-1">No models available</p>
    {% endif %}

    <label for="prompt">Prompt</label>
    <textarea id="prompt" name="prompt" rows="4" required
      placeholder="Describe the image you want…">{{ request.form.get('prompt', '') }}</textarea>

    <label for="size">Size</label>
    <select id="size" name="size">
      <option value="1024x1024" {% if request.form.get('size','1024x1024')=='1024x1024' %}selected{% endif %}>1024×1024</option>
      <option value="1792x1024" {% if request.form.get('size')=='1792x1024' %}selected{% endif %}>1792×1024 (landscape)</option>
      <option value="1024x1792" {% if request.form.get('size')=='1024x1792' %}selected{% endif %}>1024×1792 (portrait)</option>
      <option value="512x512" {% if request.form.get('size')=='512x512' %}selected{% endif %}>512×512</option>
    </select>

    <label for="aspect_ratio">Aspect ratio</label>
    <select id="aspect_ratio" name="aspect_ratio">
      <option value="">Auto (default)</option>
      <option value="1:1" {% if request.form.get('aspect_ratio')=='1:1' %}selected{% endif %}>1:1 (square)</option>
      <option value="16:9" {% if request.form.get('aspect_ratio')=='16:9' %}selected{% endif %}>16:9 (landscape)</option>
      <option value="9:16" {% if request.form.get('aspect_ratio')=='9:16' %}selected{% endif %}>9:16 (portrait)</option>
      <option value="4:3" {% if request.form.get('aspect_ratio')=='4:3' %}selected{% endif %}>4:3</option>
      <option value="3:4" {% if request.form.get('aspect_ratio')=='3:4' %}selected{% endif %}>3:4</option>
      <option value="3:2" {% if request.form.get('aspect_ratio')=='3:2' %}selected{% endif %}>3:2</option>
      <option value="2:3" {% if request.form.get('aspect_ratio')=='2:3' %}selected{% endif %}>2:3</option>
      <option value="">Auto (default 1:1)</option>
      <option value="1:1" {{ "selected" if request.form.get('aspect_ratio')=='1:1' else "" }}>1:1 (square)</option>
      <option value="16:9" {{ "selected" if request.form.get('aspect_ratio')=='16:9' else "" }}>16:9 (landscape)</option>
      <option value="9:16" {{ "selected" if request.form.get('aspect_ratio')=='9:16' else "" }}>9:16 (portrait)</option>
      <option value="4:3" {{ "selected" if request.form.get('aspect_ratio')=='4:3' else "" }}>4:3</option>
      <option value="3:4" {{ "selected" if request.form.get('aspect_ratio')=='3:4' else "" }}>3:4</option>
      <option value="3:2" {{ "selected" if request.form.get('aspect_ratio')=='3:2' else "" }}>3:2</option>
      <option value="2:3" {{ "selected" if request.form.get('aspect_ratio')=='2:3' else "" }}>2:3</option>
    </select>

    <label for="image_size">Resolution</label>
    <select id="image_size" name="image_size">
      <option value="">Auto (default)</option>
      <option value="0.5K" {% if request.form.get('image_size')=='0.5K' %}selected{% endif %}>0.5K (low)</option>
      <option value="1K" {% if request.form.get('image_size')=='1K' %}selected{% endif %}>1K (standard)</option>
      <option value="2K" {% if request.form.get('image_size')=='2K' %}selected{% endif %}>2K (high)</option>
      <option value="4K" {% if request.form.get('image_size')=='4K' %}selected{% endif %}>4K (ultra)</option>
|
||||
<option value="">Auto (default 1K)</option>
|
||||
<option value="0.5K" {{ "selected" if request.form.get('image_size')=='0.5K' else "" }}>0.5K (low)</option>
|
||||
<option value="1K" {{ "selected" if request.form.get('image_size')=='1K' else "" }}>1K (standard)</option>
|
||||
<option value="2K" {{ "selected" if request.form.get('image_size')=='2K' else "" }}>2K (high)</option>
|
||||
<option value="4K" {{ "selected" if request.form.get('image_size')=='4K' else "" }}>4K (ultra)</option>
|
||||
</select>
|
||||
|
||||
<label for="n">Number of images</label>
|
||||
<select id="n" name="n">
|
||||
<option value="1" {% if request.form.get('n','1')=='1' %}selected{% endif %}>1</option>
|
||||
<option value="2" {% if request.form.get('n')=='2' %}selected{% endif %}>2</option>
|
||||
<option value="4" {% if request.form.get('n')=='4' %}selected{% endif %}>4</option>
|
||||
</select>
|
||||
<label for="reference_image">Reference image (optional)</label>
|
||||
<input
|
||||
id="reference_image"
|
||||
name="reference_image"
|
||||
type="file"
|
||||
accept="image/png,image/jpeg,image/webp,image/gif"
|
||||
>
|
||||
<p class="text-muted mt-1" id="reference-image-help">
|
||||
Upload an image to use as visual reference (image-to-image).
|
||||
</p>
|
||||
<div class="image-upload-preview" id="image-upload-preview" hidden>
|
||||
<p class="text-muted" id="image-upload-filename"></p>
|
||||
<img id="image-upload-preview-img" alt="Uploaded reference image preview" class="generated-image">
|
||||
</div>
|
||||
|
||||
<button type="submit">Generate image</button>
|
||||
</form>
|
||||
@@ -60,7 +69,9 @@
|
||||
<div class="result">
|
||||
<h2>Generated image{{ 's' if result.images|length > 1 }}</h2>
|
||||
{% for img in result.images %}
|
||||
{% if img.url %}
|
||||
<img src="{{ img.url }}" alt="Generated image" class="generated-image">
|
||||
{% endif %}
|
||||
{% if img.revised_prompt %}
|
||||
<p class="text-muted mt-1" style="font-size:0.8rem;">{{ img.revised_prompt }}</p>
|
||||
{% endif %}
|
||||
|
||||
@@ -1,44 +1,93 @@
- {% extends "base.html" %} {% block title %}Text Generation — AI Allucanget{%
- endblock %} {% block content %}
- <div class="card">
-   <h1>Text Generation</h1>
-   <form method="post">
-     <label for="model">Model</label>
+ {% extends "base.html" %} {% block title %}Text Generation — All You Can GET
+ AI{% endblock %} {% block content %}
+ <div class="card chat-page">
+   <div class="chat-header">
+     <h1>Text Chat</h1>
+     <form method="post" style="display: inline">
+       <input type="hidden" name="action" value="clear" />
+       <button type="submit" class="btn-secondary btn-sm">New Chat</button>
+     </form>
+   </div>
+
+   <!-- Config row -->
+   <details class="chat-config" {% if not chat_history %}open{% endif %}>
+     <summary>Model & System Prompt</summary>
+     <div class="chat-config-body">
+       <label for="cfg-model">Model</label>
      {% if models %}
+       <select id="cfg-model" form="chat-form" name="model" required>
+         {% for m in models %}
+         <option value="{{ m.id }}" {{ "selected" if current_model == m.id else "" }}>{{ m.name }}</option>
+         {% endfor %}
+       </select>
      {% else %}
      <input
-       id="model"
+       id="cfg-model"
+       form="chat-form"
        name="model"
        type="text"
        required
        placeholder="e.g. openai/gpt-4o"
-       value="{{ request.form.get('model', '') }}"
+       value="{{ current_model }}"
      />
      <p class="text-muted mt-1">No models available</p>
      {% endif %}

-     <label for="prompt">Prompt</label>
+       <label for="cfg-sys">System prompt (optional)</label>
      <textarea
-       id="prompt"
-       name="prompt"
-       rows="5"
-       required
-       placeholder="Describe what you want…"
+       id="cfg-sys"
+       form="chat-form"
+       name="system_prompt"
+       rows="2"
+       placeholder="Set behavior/instructions for assistant…"
      >
-       {{ request.form.get('prompt', '') }}</textarea
+       {{ system_prompt }}</textarea
      >
+     </div>
+   </details>

-     <button type="submit">Generate text</button>
-   </form>
-
-   {% if error %}
-   <div class="alert alert-error mt-2">{{ error }}</div>
-   {% endif %} {% if result %}
-   <div class="result">
-     <h2>Result</h2>
-     <pre>{{ result.content }}</pre>
-     {% if result.usage %}
-     <p class="text-muted mt-1" style="font-size: 0.8rem">
-       Tokens: {{ result.usage.get('total_tokens', '—') }}
-     </p>
+   <!-- Chat history -->
+   <div class="chat-history" id="chat-history">
+     {% if not chat_history %}
+     <p class="chat-empty">No messages yet. Start the conversation below.</p>
+     {% endif %} {% for msg in chat_history %} {% if msg.role == "user" %}
+     <div class="chat-bubble chat-bubble--user">
+       <span class="bubble-role">You</span>
+       <div class="bubble-content">{{ msg.content }}</div>
+     </div>
+     {% elif msg.role == "assistant" %}
+     <div class="chat-bubble chat-bubble--assistant">
+       <span class="bubble-role">Assistant</span>
+       <div class="bubble-content">{{ msg.content }}</div>
+       {% if msg.usage %}
+       <span class="bubble-meta"
+         >{{ msg.usage.get('total_tokens', '') }} tokens</span
+       >
+       {% endif %}
+     </div>
+     {% endif %} {% endfor %} {% if error %}
+     <div class="alert alert-error">{{ error }}</div>
+     {% endif %}
+   </div>
+
+   <!-- Input -->
+   <form id="chat-form" method="post" class="chat-input-row">
+     <input type="hidden" name="action" value="send" />
+     <textarea
+       name="prompt"
+       id="prompt"
+       rows="2"
+       required
+       placeholder="Type a message…"
+       class="chat-input-textarea"
+     ></textarea>
+     <button type="submit" class="btn-primary">Send</button>
+   </form>
  </div>

+ <script>
+   // Auto-scroll chat to bottom
+   const hist = document.getElementById("chat-history");
+   if (hist) hist.scrollTop = hist.scrollHeight;
+ </script>
  {% endblock %}
@@ -1,5 +1,5 @@
- {% extends "base.html" %} {% block title %}Video Generation — AI Allucanget{%
- endblock %} {% block content %}
+ {% extends "base.html" %} {% block title %}Video Generation — All You Can GET
+ AI{% endblock %} {% block content %}
  <div class="card">
    <h1>Video Generation</h1>

@@ -19,6 +19,13 @@ endblock %} {% block content %}
    <input type="hidden" name="mode" value="text" />

    <label for="model-t">Model</label>
+   {% if models %}
+   <select id="model-t" name="model" required>
+     {% for m in models %}
+     <option value="{{ m.id }}" {% if request.form.get('model', '') == m.id and request.form.get('mode','text')=='text' %}selected{% endif %}>{{ m.name }}</option>
+     {% endfor %}
+   </select>
+   {% else %}
    <input
      id="model-t"
      name="model"
@@ -27,6 +34,8 @@ endblock %} {% block content %}
      placeholder="e.g. openai/sora-2-pro"
      value="{{ request.form.get('model', '') if request.form.get('mode','text')=='text' else '' }}"
    />
+   <p class="text-muted mt-1">No models available</p>
+   {% endif %}

    <label for="prompt-t">Prompt</label>
    <textarea
@@ -54,21 +63,14 @@ endblock %} {% block content %}
      <option value="1080p">1080p</option>
    </select>

-   <label for="duration-t"
-     >Duration: <span id="duration-t-val">5</span>s</label
-   >
-   <input
-     type="range"
-     id="duration-t"
-     name="duration_seconds"
-     min="5"
-     max="60"
-     step="1"
-     value="5"
-     oninput="
-       document.getElementById('duration-t-val').textContent = this.value
-     "
-   />
+   <label for="duration-t">Duration (seconds)</label>
+   <select id="duration-t" name="duration_seconds">
+     <option value="4">4s</option>
+     <option value="8">8s</option>
+     <option value="12" selected>12s</option>
+     <option value="16">16s</option>
+     <option value="20">20s</option>
+   </select>

    <button type="submit">Generate video</button>
  </form>
@@ -80,6 +82,13 @@ endblock %} {% block content %}
    <input type="hidden" name="mode" value="image" />

    <label for="model-i">Model</label>
+   {% if models %}
+   <select id="model-i" name="model" required>
+     {% for m in models %}
+     <option value="{{ m.id }}" {% if request.form.get('model', '') == m.id and request.form.get('mode')=='image' %}selected{% endif %}>{{ m.name }}</option>
+     {% endfor %}
+   </select>
+   {% else %}
    <input
      id="model-i"
      name="model"
@@ -88,6 +97,8 @@ endblock %} {% block content %}
      placeholder="e.g. openai/sora-2-pro"
      value="{{ request.form.get('model', '') if request.form.get('mode')=='image' else '' }}"
    />
+   <p class="text-muted mt-1">No models available</p>
+   {% endif %}

    <label for="image_url">Source image URL</label>
    <input
@@ -125,21 +136,14 @@ endblock %} {% block content %}
      <option value="1080p">1080p</option>
    </select>

-   <label for="duration-i"
-     >Duration: <span id="duration-i-val">5</span>s</label
-   >
-   <input
-     type="range"
-     id="duration-i"
-     name="duration_seconds"
-     min="5"
-     max="60"
-     step="1"
-     value="5"
-     oninput="
-       document.getElementById('duration-i-val').textContent = this.value
-     "
-   />
+   <label for="duration-i">Duration (seconds)</label>
+   <select id="duration-i" name="duration_seconds">
+     <option value="4">4s</option>
+     <option value="8">8s</option>
+     <option value="12" selected>12s</option>
+     <option value="16">16s</option>
+     <option value="20">20s</option>
+   </select>

    <button type="submit">Generate video from image</button>
  </form>
@@ -151,9 +155,9 @@ endblock %} {% block content %}
  {% endif %} {% if result %}
  <div class="result">
    <h2>Video job</h2>
-   <p>Job ID: <code>{{ result.id }}</code></p>
-   {% if result.status in ('queued', 'processing') and result.polling_url %}
-   <div id="video-poll-status" data-polling-url="{{ result.polling_url }}">
+   <p>Job ID: <code>{{ result.db_id or result.id }}</code></p>
+   {% if result.status in ('queued', 'processing') and result.db_id %}
+   <div id="video-poll-status" data-video-id="{{ result.db_id }}">
    <p>
      <span id="poll-status-text"
        >Status: <strong>{{ result.status }}</strong></span
@@ -161,6 +165,13 @@ endblock %} {% block content %}
      — checking for updates every 5 s…
    </p>
    <div id="poll-video-container"></div>
+   <button
+     id="cancel-video-btn"
+     class="mt-2 px-4 py-2 bg-red-600 hover:bg-red-700 text-white rounded-md text-sm"
+   >
+     Cancel Job
+   </button>
+   <p id="cancel-msg" class="text-sm mt-2 hidden"></p>
  </div>
  {% elif result.video_url %}
  <video
@@ -0,0 +1,35 @@
+ {% extends "base.html" %} {% block title %}Generated Image{% endblock %} {%
+ block content %}
+ <div class="container mx-auto px-4 py-8">
+   <a
+     href="{{ url_for('gallery') }}"
+     class="text-blue-400 hover:underline mb-4 inline-block"
+     >← Back to Gallery</a
+   >
+
+   {% if image %}
+   <h1 class="text-2xl font-bold mb-4">Generated Image</h1>
+   <div class="bg-gray-800 rounded-lg shadow-lg overflow-hidden">
+     <img
+       src="{{ image.image_data }}"
+       alt="{{ image.prompt }}"
+       class="w-full object-contain"
+     />
+     <div class="p-6">
+       <h2 class="text-xl font-semibold mb-2">Prompt</h2>
+       <p class="text-gray-300 bg-gray-900 p-3 rounded-md">{{ image.prompt }}</p>
+       <div class="mt-4 text-sm text-gray-400">
+         <p><strong>Model:</strong> {{ image.model_id }}</p>
+         <p>
+           <strong>Created:</strong> {{ image.created_at | fromisoformat |
+           humantime }}
+         </p>
+       </div>
+     </div>
+   </div>
+   {% else %}
+   <h1 class="text-2xl font-bold">Image not found</h1>
+   <p class="text-gray-400 mt-2">Could not find details for this image.</p>
+   {% endif %}
+ </div>
+ {% endblock %}
@@ -1,14 +1,13 @@
- {% extends "base.html" %}
- {% block title %}Log in — AI Allucanget{% endblock %}
- {% block content %}
+ {% extends "base.html" %} {% block title %}Log in — All You Can GET AI{%
+ endblock %} {% block content %}
  <div class="card">
    <h1>Log in</h1>
    <form method="post">
      <label for="email">Email</label>
-     <input id="email" name="email" type="email" required autofocus>
+     <input id="email" name="email" type="email" required autofocus />

      <label for="password">Password</label>
-     <input id="password" name="password" type="password" required>
+     <input id="password" name="password" type="password" required />

      <button type="submit">Log in</button>
    </form>
@@ -1,5 +1,5 @@
- {% extends "base.html" %} {% block title %}Profile — AI Allucanget{% endblock %}
- {% block content %}
+ {% extends "base.html" %} {% block title %}Profile — All You Can GET AI{%
+ endblock %} {% block content %}
  <div class="card">
    <h1>Your Profile</h1>
@@ -1,14 +1,19 @@
- {% extends "base.html" %}
- {% block title %}Register — AI Allucanget{% endblock %}
- {% block content %}
+ {% extends "base.html" %} {% block title %}Register — All You Can GET AI{%
+ endblock %} {% block content %}
  <div class="card">
    <h1>Create account</h1>
    <form method="post">
      <label for="email">Email</label>
-     <input id="email" name="email" type="email" required autofocus>
+     <input id="email" name="email" type="email" required autofocus />

      <label for="password">Password</label>
-     <input id="password" name="password" type="password" required minlength="8">
+     <input
+       id="password"
+       name="password"
+       type="password"
+       required
+       minlength="8"
+     />

      <button type="submit">Register</button>
    </form>
@@ -0,0 +1,40 @@
+ {% extends "base.html" %} {% block title %}Uploaded Image{% endblock %} {% block
+ content %}
+ <div class="container mx-auto px-4 py-8">
+   <a
+     href="{{ url_for('gallery') }}"
+     class="text-blue-400 hover:underline mb-4 inline-block"
+     >← Back to Gallery</a
+   >
+
+   {% if image %}
+   <h1 class="text-2xl font-bold mb-4">Uploaded Image</h1>
+   <div class="bg-gray-800 rounded-lg shadow-lg overflow-hidden">
+     <img
+       src="{{ url_for('serve_uploaded_image', image_id=image.id) }}"
+       alt="{{ image.filename }}"
+       class="w-full object-contain"
+     />
+     <div class="p-6">
+       <h2 class="text-xl font-semibold mb-2">Details</h2>
+       <div class="mt-4 text-sm text-gray-400">
+         <p><strong>Filename:</strong> {{ image.filename }}</p>
+         <p><strong>Content Type:</strong> {{ image.content_type }}</p>
+         <p>
+           <strong>Size:</strong> {{ (image.size_bytes / 1024) | round(2) }} KB
+         </p>
+         <p>
+           <strong>Uploaded:</strong> {{ image.created_at | fromisoformat |
+           humantime }}
+         </p>
+       </div>
+     </div>
+   </div>
+   {% else %}
+   <h1 class="text-2xl font-bold">Image not found</h1>
+   <p class="text-gray-400 mt-2">
+     Could not find details for this uploaded image.
+   </p>
+   {% endif %}
+ </div>
+ {% endblock %}
@@ -0,0 +1,77 @@
+ {% extends "base.html" %} {% block title %}Generated Video{% endblock %} {%
+ block content %}
+ <div class="container mx-auto px-4 py-8">
+   <a
+     href="{{ url_for('gallery') }}"
+     class="text-blue-400 hover:underline mb-4 inline-block"
+     >← Back to Gallery</a
+   >
+
+   {% if video %}
+   <h1 class="text-2xl font-bold mb-4">Video Generation Job</h1>
+   <div class="bg-gray-800 rounded-lg shadow-lg overflow-hidden">
+     {% if video.status == 'completed' and video.video_url %}
+     <video src="{{ video.video_url }}" controls class="w-full"></video>
+     {% elif video.status in ('queued', 'processing') %}
+     <div
+       class="w-full bg-black aspect-video flex flex-col items-center justify-center p-6 text-center"
+       id="video-poll-status"
+       data-video-id="{{ video.id }}"
+     >
+       <p class="text-xl font-semibold">
+         Status: <strong id="poll-status-text">{{ video.status }}</strong>
+       </p>
+       <p class="text-gray-400 mt-2">
+         Your video is being processed. This page will update automatically when
+         it's ready.
+       </p>
+       <div class="spinner mt-4"></div>
+       <button
+         id="cancel-video-btn"
+         class="mt-4 px-4 py-2 bg-red-600 hover:bg-red-700 text-white rounded-md text-sm"
+       >
+         Cancel Job
+       </button>
+       <p id="cancel-msg" class="text-sm mt-2 hidden"></p>
+     </div>
+     {% elif video.status == 'failed' %}
+     <div
+       class="w-full bg-black aspect-video flex flex-col items-center justify-center p-6 text-center"
+     >
+       <p class="text-xl font-semibold text-red-500">Generation Failed</p>
+       <p class="text-gray-400 mt-2">
+         {{ video.error or 'An unknown error occurred.' }}
+       </p>
+     </div>
+     {% else %}
+     <div
+       class="w-full bg-black aspect-video flex flex-col items-center justify-center p-6 text-center"
+     >
+       <p class="text-xl font-semibold">Video Not Available</p>
+       <p class="text-gray-400 mt-2">Status: {{ video.status }}</p>
+     </div>
+     {% endif %}
+
+     <div class="p-6">
+       <h2 class="text-xl font-semibold mb-2">Prompt</h2>
+       <p class="text-gray-300 bg-gray-900 p-3 rounded-md">{{ video.prompt }}</p>
+       <div class="mt-4 text-sm text-gray-400">
+         <p><strong>Model:</strong> {{ video.model_id }}</p>
+         <p><strong>Job ID:</strong> <code>{{ video.job_id }}</code></p>
+         <p>
+           <strong>Created:</strong> {{ video.created_at | fromisoformat |
+           humantime }}
+         </p>
+         <p>
+           <strong>Last Update:</strong> {{ video.updated_at | fromisoformat |
+           humantime }}
+         </p>
+       </div>
+     </div>
+   </div>
+   {% else %}
+   <h1 class="text-2xl font-bold">Video job not found</h1>
+   <p class="text-gray-400 mt-2">Could not find details for this video job.</p>
+   {% endif %}
+ </div>
+ {% endblock %}
@@ -1,13 +0,0 @@
- # Nixpacks configuration for the Flask frontend
-
- [phases.setup]
- nixpkgsArchive = "88a9d1386465831607986442fd9c8c0e7a1b2f5"
- aptPkgs = ["git"]
-
- [phases.install]
- # Nixpacks auto-detects Python and runs pip install -r requirements.txt
-
- [build]
-
- [deploy]
- startCommand = "gunicorn frontend.app.main:app --bind 0.0.0.0:5000 --workers 2 --timeout 120"
@@ -0,0 +1,2 @@
+ pytest
+ pytest-mock
@@ -0,0 +1,7 @@
+ Flask
+ gunicorn
+ httpx
+ itsdangerous
+ Jinja2
+ MarkupSafe
+ Werkzeug
@@ -6,7 +6,7 @@ from unittest.mock import MagicMock, patch
 os.environ.setdefault("FLASK_SECRET_KEY", "test-secret")
 os.environ.setdefault("BACKEND_URL", "http://backend-mock")

- from frontend.app.main import app  # noqa: E402
+ from app.main import app  # noqa: E402


 @pytest.fixture
@@ -148,9 +148,12 @@ def test_dashboard_requires_login(client):

 def test_dashboard_renders_user_info(client):
     _set_auth(client)
-    mock = _mock_response(
+    me_mock = _mock_response(
         200, {"id": "1", "email": "u@example.com", "role": "user"})
-    with patch("frontend.app.main.httpx.request", return_value=mock):
+    images_mock = _mock_response(200, [])
+    gen_images_mock = _mock_response(200, [])
+    gen_videos_mock = _mock_response(200, [])
+    with patch("frontend.app.main.httpx.request", side_effect=[me_mock, images_mock, gen_images_mock, gen_videos_mock]):
         resp = client.get("/dashboard")
     assert resp.status_code == 200
     assert b"u@example.com" in resp.data
@@ -171,7 +174,7 @@ def test_generate_text_page_renders(client):
     _set_auth(client)
     resp = client.get("/generate/text")
     assert resp.status_code == 200
-    assert b"Text Generation" in resp.data
+    assert b"Text Chat" in resp.data


 def test_generate_text_requires_login(client):
@@ -182,13 +185,108 @@ def test_generate_text_requires_login(client):

 def test_generate_text_success(client):
     _set_auth(client)
-    mock = _mock_response(
+    gen_mock = _mock_response(
         200, {"id": "g1", "model": "openai/gpt-4o", "content": "Hello world", "usage": None})
-    with patch("frontend.app.main.httpx.request", return_value=mock):
+    models_mock = _mock_response(200, [
+        {"id": "openai/gpt-4o", "name": "GPT-4o", "modality": "text"}
+    ])
+    with patch("frontend.app.main.httpx.request", side_effect=[gen_mock, models_mock]):
         resp = client.post(
-            "/generate/text", data={"model": "openai/gpt-4o", "prompt": "Say hello"})
+            "/generate/text",
+            data={"model": "openai/gpt-4o", "prompt": "Say hello", "action": "send"})
     assert resp.status_code == 200
     assert b"Hello world" in resp.data
+    assert b"chat-bubble--assistant" in resp.data
+
+
+def test_generate_text_page_shows_optional_system_prompt(client):
+    _set_auth(client)
+    models_mock = _mock_response(200, [])
+    with patch("frontend.app.main.httpx.request", return_value=models_mock):
+        resp = client.get("/generate/text")
+    assert resp.status_code == 200
+    assert b"System prompt (optional)" in resp.data
+    assert b'name="system_prompt"' in resp.data
+
+
+def test_generate_text_forwards_system_prompt(client):
+    _set_auth(client)
+    gen_mock = _mock_response(
+        200, {"id": "g1", "model": "openai/gpt-4o", "content": "Hello world", "usage": None})
+    models_mock = _mock_response(200, [
+        {"id": "openai/gpt-4o", "name": "GPT-4o", "modality": "text"}
+    ])
+
+    with patch("frontend.app.main.httpx.request", side_effect=[gen_mock, models_mock]) as mock_request:
+        resp = client.post(
+            "/generate/text",
+            data={
+                "model": "openai/gpt-4o",
+                "prompt": "Say hello",
+                "system_prompt": "You are concise.",
+                "action": "send",
+            },
+        )
+
+    assert resp.status_code == 200
+    first_call_kwargs = mock_request.call_args_list[0].kwargs
+    assert first_call_kwargs["json"]["system_prompt"] == "You are concise."
+    # Messages array sent (not bare prompt)
+    assert "messages" in first_call_kwargs["json"]
+
+
+def test_generate_text_chat_history_accumulates(client):
+    """Second message includes prior user+assistant turns in messages array."""
+    _set_auth(client)
+
+    turn1_gen = _mock_response(
+        200, {"id": "g1", "model": "openai/gpt-4o", "content": "Turn 1 reply", "usage": None})
+    turn1_models = _mock_response(
+        200, [{"id": "openai/gpt-4o", "name": "GPT-4o", "modality": "text"}])
+    turn2_gen = _mock_response(
+        200, {"id": "g2", "model": "openai/gpt-4o", "content": "Turn 2 reply", "usage": None})
+    turn2_models = _mock_response(
+        200, [{"id": "openai/gpt-4o", "name": "GPT-4o", "modality": "text"}])
+
+    with patch("frontend.app.main.httpx.request", side_effect=[turn1_gen, turn1_models]):
+        client.post(
+            "/generate/text", data={"model": "openai/gpt-4o", "prompt": "First", "action": "send"})
+
+    with patch("frontend.app.main.httpx.request", side_effect=[turn2_gen, turn2_models]) as mock_req:
+        resp = client.post(
+            "/generate/text", data={"model": "openai/gpt-4o", "prompt": "Second", "action": "send"})
+
+    assert resp.status_code == 200
+    assert b"Turn 1 reply" in resp.data
+    assert b"Turn 2 reply" in resp.data
+    # Backend received 3 messages: First(user), Turn1(assistant), Second(user)
+    sent_messages = mock_req.call_args_list[0].kwargs["json"]["messages"]
+    assert len(sent_messages) == 3
+    assert sent_messages[0]["role"] == "user" and sent_messages[0]["content"] == "First"
+    assert sent_messages[1]["role"] == "assistant"
+    assert sent_messages[2]["role"] == "user" and sent_messages[2]["content"] == "Second"
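The history test above pins down the shape of the payload the frontend forwards: prior session turns plus the new prompt become a `messages` array, with the system prompt carried as a separate field. A minimal sketch of how the route might assemble it (helper name and structure are assumptions for illustration, not the app's actual code):

```python
def build_chat_payload(model, history, prompt, system_prompt=""):
    """Assemble the backend request body from session chat history.

    Hypothetical helper: `history` is a list of {"role", "content"} dicts
    kept in the Flask session; the new user prompt is appended and the
    optional system prompt is sent as its own field.
    """
    messages = list(history) + [{"role": "user", "content": prompt}]
    payload = {"model": model, "messages": messages}
    if system_prompt.strip():
        payload["system_prompt"] = system_prompt.strip()
    return payload
```

After the backend replies, the route would append the assistant turn to the session history so the next call sends the full conversation, which is exactly what the accumulation test asserts.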
+def test_generate_text_clear_resets_history(client):
+    """Clear action removes session history and redirects."""
+    _set_auth(client)
+
+    gen_mock = _mock_response(
+        200, {"id": "g1", "model": "openai/gpt-4o", "content": "Reply", "usage": None})
+    models_mock = _mock_response(
+        200, [{"id": "openai/gpt-4o", "name": "GPT-4o", "modality": "text"}])
+    with patch("frontend.app.main.httpx.request", side_effect=[gen_mock, models_mock]):
+        client.post(
+            "/generate/text", data={"model": "openai/gpt-4o", "prompt": "Hi", "action": "send"})
+
+    clear_resp = client.post("/generate/text", data={"action": "clear"})
+    assert clear_resp.status_code == 302
+
+    models_mock2 = _mock_response(
+        200, [{"id": "openai/gpt-4o", "name": "GPT-4o", "modality": "text"}])
+    with patch("frontend.app.main.httpx.request", return_value=models_mock2):
+        get_resp = client.get("/generate/text")
+    assert b"No messages yet" in get_resp.data


 def test_generate_image_page_renders(client):
@@ -196,6 +294,7 @@ def test_generate_image_page_renders(client):
     resp = client.get("/generate/image")
     assert resp.status_code == 200
     assert b"Image Generation" in resp.data
+    assert b"reference_image" in resp.data


 def test_generate_image_success(client):
@@ -249,8 +348,11 @@ def test_generate_video_image_mode(client):

 def test_generate_upstream_error_shows_message(client):
     _set_auth(client)
-    mock = _mock_response(502, {"detail": "OpenRouter error: timeout"})
-    with patch("frontend.app.main.httpx.request", return_value=mock):
+    gen_mock = _mock_response(502, {"detail": "OpenRouter error: timeout"})
+    models_mock = _mock_response(200, [
+        {"id": "openai/gpt-4o", "name": "GPT-4o", "modality": "text"}
+    ])
+    with patch("frontend.app.main.httpx.request", side_effect=[gen_mock, models_mock]):
         resp = client.post(
             "/generate/text", data={"model": "openai/gpt-4o", "prompt": "Hi"})
     assert resp.status_code == 200
@@ -417,4 +519,104 @@ def test_video_generate_renders_polling_ui(client):
         })
     assert resp.status_code == 200
     assert b"video-poll-status" in resp.data
-    assert b"openrouter.ai/api/v1/videos/v1" in resp.data


+# ---------------------------------------------------------------------------
+# Image upload — frontend proxy + dashboard
+# ---------------------------------------------------------------------------
+
+def test_dashboard_shows_uploaded_images(client):
+    _set_auth(client)
+    me_mock = _mock_response(
+        200, {"id": "1", "email": "u@example.com", "role": "user"})
+    images_mock = _mock_response(200, [
+        {"id": "img-1", "filename": "cat.png", "content_type": "image/png",
+         "size_bytes": 1024, "created_at": "2026-04-29T10:00:00"},
+    ])
+    gen_images_mock = _mock_response(200, [])
+    gen_videos_mock = _mock_response(200, [])
+    with patch("frontend.app.main.httpx.request", side_effect=[me_mock, images_mock, gen_images_mock, gen_videos_mock]):
+        resp = client.get("/dashboard")
+    assert resp.status_code == 200
+    assert b"cat.png" in resp.data
+    assert b"img-1" in resp.data
+
+
+def test_dashboard_shows_generated_images(client):
+    _set_auth(client)
+    me_mock = _mock_response(
+        200, {"id": "1", "email": "u@example.com", "role": "user"})
+    images_mock = _mock_response(200, [])
+    gen_images_mock = _mock_response(200, [
+        {
+            "id": "gen-1",
+            "model_id": "google/gemini-2.5-flash-image",
+            "prompt": "A cat on the moon",
+            "image_data": "data:image/png;base64,abc123",
+            "created_at": "2026-04-29T10:00:00",
+        }
+    ])
+    gen_videos_mock = _mock_response(200, [])
+    with patch("frontend.app.main.httpx.request", side_effect=[me_mock, images_mock, gen_images_mock, gen_videos_mock]):
+        resp = client.get("/dashboard")
+    assert resp.status_code == 200
+    assert b"Generated images" in resp.data
+    assert b"A cat on the moon" in resp.data
+    assert b"data:image/png;base64,abc123" in resp.data
+
+
+def test_dashboard_no_images_section_when_empty(client):
+    _set_auth(client)
+    me_mock = _mock_response(
+        200, {"id": "1", "email": "u@example.com", "role": "user"})
+    images_mock = _mock_response(200, [])
+    gen_images_mock = _mock_response(200, [])
+    gen_videos_mock = _mock_response(200, [])
+    with patch("frontend.app.main.httpx.request", side_effect=[me_mock, images_mock, gen_images_mock, gen_videos_mock]):
+        resp = client.get("/dashboard")
+    assert resp.status_code == 200
+    assert b"Uploaded reference images" not in resp.data
+
+
+def test_serve_uploaded_image_proxy(client):
+    _set_auth(client)
+    img_bytes = b"\x89PNG\r\n\x1a\n"
+    mock = MagicMock()
+    mock.status_code = 200
+    mock.content = img_bytes
+    mock.headers = {"content-type": "image/png"}
+    with patch("frontend.app.main.httpx.request", return_value=mock):
+        resp = client.get("/images/img-1/file")
+    assert resp.status_code == 200
+    assert resp.content_type == "image/png"
+    assert resp.data == img_bytes
+
+
+def test_serve_uploaded_image_requires_login(client):
+    resp = client.get("/images/img-1/file")
+    assert resp.status_code == 302
+    assert "/login" in resp.headers["Location"]
+
+
+def test_serve_uploaded_image_not_found_proxied(client):
+    _set_auth(client)
+    mock = _mock_response(404, {"detail": "Image not found."})
+    mock.content = b""
+    with patch("frontend.app.main.httpx.request", return_value=mock):
+        resp = client.get("/images/bad-id/file")
+    assert resp.status_code == 404
+
+
+def test_generate_image_uploads_reference_then_generates(client):
+    _set_auth(client)
+    gen_mock = _mock_response(200, {
+        "id": "g2", "model": "openai/dall-e-3",
+        "images": [{"url": "https://example.com/out.png", "revised_prompt": None, "b64_json": None}]
+    })
+    # No file field → upload branch skipped; only generate call is made
+    with patch("frontend.app.main.httpx.request", return_value=gen_mock):
+        resp = client.post("/generate/image", data={
+            "model": "openai/dall-e-3", "prompt": "A cat", "n": "1", "size": "1024x1024",
+        }, content_type="multipart/form-data")
+    assert resp.status_code == 200
+    assert b"example.com/out.png" in resp.data
+2 -2

```diff
@@ -3,12 +3,12 @@

 # Backend API proxy
 upstream backend {
-    server 127.0.0.1:8000;
+    server 127.0.0.1:12015;
 }

 # Frontend proxy
 upstream frontend {
-    server 127.0.0.1:5000;
+    server 127.0.0.1:12016;
 }

 server {
```
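The `server` block itself is truncated in this hunk. With these two upstreams, a typical layout would route API traffic to the FastAPI backend and everything else to the Flask frontend — a sketch only, assuming standard `proxy_pass` routing (the `location` prefixes and `listen`/`server_name` values are assumptions, not the repo's actual config):

```nginx
server {
    listen 80;
    server_name ai.allucanget.biz;

    # API calls go to the FastAPI backend upstream
    location /api/ {
        proxy_pass http://backend;
        proxy_set_header Host $host;
    }

    # Everything else goes to the Flask frontend upstream
    location / {
        proxy_pass http://frontend;
        proxy_set_header Host $host;
    }
}
```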
```diff
@@ -1,14 +0,0 @@
-# Nixpacks configuration for ai.allucanget.biz
-# Shared settings for both backend and frontend services
-
-[phases.setup]
-nixpkgsArchive = "88a9d1386465831607986442fd9c8c0e7a1b2f5"
-aptPkgs = ["git"]
-
-[phases.install]
-# Nixpacks auto-detects Python and runs pip install -r requirements.txt
-# No custom commands needed here
-
-[build]
-
-[deploy]
```
+40 -45

```diff
@@ -1,46 +1,41 @@
-annotated-doc==0.0.4
-annotated-types==0.7.0
-anyio==4.13.0
-backports.asyncio.runner==1.2.0
-bcrypt==3.2.2
-blinker==1.9.0
-certifi==2026.4.22
-cffi==2.0.0
-click==8.3.3
-colorama==0.4.6
-cryptography==47.0.0
-dnspython==2.8.0
-duckdb==1.5.2
-ecdsa==0.19.2
-email-validator==2.3.0
-exceptiongroup==1.3.1
-fastapi==0.136.1
-Flask==3.1.3
-h11==0.16.0
-httpcore==1.0.9
-httpx==0.28.1
-idna==3.13
-iniconfig==2.3.0
-itsdangerous==2.2.0
-Jinja2==3.1.6
-MarkupSafe==3.0.3
-packaging==26.2
+anyio
+bcrypt==4.0.1
+blinker
+certifi
+cffi
+cryptography
+dnspython
+duckdb
+ecdsa
+email-validator
+exceptiongroup
+fastapi
+Flask
+h11
+httpcore
+httpx
+idna
+iniconfig
+itsdangerous
+Jinja2
+MarkupSafe
+packaging
 passlib==1.7.4
-pluggy==1.6.0
-pyasn1==0.6.3
-pycparser==3.0
-pydantic==2.13.3
-pydantic_core==2.46.3
-Pygments==2.20.0
-pytest==9.0.3
-pytest-asyncio==1.3.0
-python-dotenv==1.2.2
-python-jose==3.5.0
-rsa==4.9.1
-six==1.17.0
-starlette==1.0.0
-tomli==2.4.1
-typing-inspection==0.4.2
-typing_extensions==4.15.0
-uvicorn==0.46.0
-Werkzeug==3.1.8
+pluggy
+pyasn1
+pycparser
+pydantic
+pydantic_core
+Pygments
+pytest
+pytest-asyncio
+python-dotenv
+python-jose
+rsa
+six
+starlette
+tomli
+typing-inspection
+typing_extensions
+uvicorn
+Werkzeug
```
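This hunk drops exact version pins for everything except `bcrypt==4.0.1` (and the unchanged `passlib==1.7.4`), trading reproducible installs for automatic upgrades. To audit which entries in such a file remain pinned, a small scan like the following works (a hypothetical helper using simple `==` matching, not part of the repo):

```python
def pinned_entries(requirements_text):
    """Return the requirement lines that pin an exact version with '=='."""
    pinned = []
    for line in requirements_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        if "==" in line:
            pinned.append(line)
    return pinned
```

Running it over the new file would report only the two remaining pins.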