Compare commits

..

15 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| zwitschi | 02fc5995db | feat: increase main layout max-width to enhance content display (Co-authored-by: Copilot <copilot@github.com>) | 2026-04-29 21:01:36 +02:00 |
| zwitschi | 299ad7d943 | feat: add video job cancellation functionality and error tracking in generated videos (Co-authored-by: Copilot <copilot@github.com>) | 2026-04-29 20:04:10 +02:00 |
| zwitschi | 3d0a08a8ef | feat: remove admin video jobs link from navigation and update card background style (Co-authored-by: Copilot <copilot@github.com>) | 2026-04-29 19:06:16 +02:00 |
| zwitschi | 2ca7ae538f | feat: add admin API endpoints for video management, update frontend to use new API routes (Co-authored-by: Copilot <copilot@github.com>) | 2026-04-29 18:58:04 +02:00 |
| zwitschi | 37edef716a | feat: implement video job management with retry and delete functionality, enhance video generation status tracking (Co-authored-by: Copilot <copilot@github.com>) | 2026-04-29 18:27:59 +02:00 |
| zwitschi | d5a94947de | feat: update documentation with project details, deployment instructions, and database concurrency management (Co-authored-by: Copilot <copilot@github.com>) | 2026-04-29 18:25:53 +02:00 |
| zwitschi | 615b842b03 | feat: streamline Coolify deployment guide by removing redundant steps and clarifying environment variable setup | 2026-04-29 17:44:35 +02:00 |
| zwitschi | 998cc2e472 | feat: adjust script block positioning and add TODO for implementing infinite scroll functionality (Co-authored-by: Copilot <copilot@github.com>) | 2026-04-29 17:38:54 +02:00 |
| zwitschi | 81c06ad13b | feat: update video jobs API calls to use backend URL and authorization headers (Co-authored-by: Copilot <copilot@github.com>) | 2026-04-29 17:23:34 +02:00 |
| zwitschi | d1c2b6da68 | feat: enhance gallery template with infinite scroll and improved video/image displays (Co-authored-by: Copilot <copilot@github.com>) | 2026-04-29 17:13:23 +02:00 |
| zwitschi | 0ae0e6e7fa | feat: update README and deployment docs for clarity and Python version requirement (Co-authored-by: Copilot <copilot@github.com>) | 2026-04-29 17:09:22 +02:00 |
| zwitschi | bdb7c7c43a | feat: update .gitignore to include logs and generated data directories (Co-authored-by: Copilot <copilot@github.com>) | 2026-04-29 16:51:37 +02:00 |
| zwitschi | bd77d4c43e | feat: add admin video jobs management endpoints and UI for listing, cancelling, and purging video jobs (Co-authored-by: Copilot <copilot@github.com>) | 2026-04-29 16:49:08 +02:00 |
| zwitschi | cc96d26b08 | feat: enhance video generation responses with database ID and update dashboard to display pending and completed videos (Co-authored-by: Copilot <copilot@github.com>) | 2026-04-29 16:39:46 +02:00 |
| zwitschi | 8e36f48527 | feat: enhance database queries with error handling and improve SQL statement readability (Co-authored-by: Copilot <copilot@github.com>) | 2026-04-29 16:28:22 +02:00 |
30 changed files with 1643 additions and 685 deletions
+4 -1
@@ -45,5 +45,8 @@ Thumbs.db
 # instructions
 .github/instructions/
-backend/data/
+# Logs and generated data
+logs/
+data/
+backend/data/
+55 -13
@@ -1,7 +1,16 @@
-# AI
+# All You Can GET AI
 A multi-modal AI web application. Users can choose between different AI models for text generation, text-to-image, text-to-video, and image-to-video generation, powered by [openrouter.ai](https://openrouter.ai).
+Key features:
+- Multi-modal AI generation (text, images, videos)
+- User authentication and role-based access control
+- Admin dashboard for managing users, models, and video jobs
+- Gallery for viewing generated images and videos
+- Chat interface with message history
+- Image upload and preview functionality
 ## Components
 | Component | Technology | Description |
@@ -31,33 +40,58 @@ python -m venv .venv
 # Linux/macOS
 source .venv/bin/activate
-# Install dependencies
+# Install core dependencies
 pip install -r requirements.txt
-# Copy and fill in environment variables
+# Install development dependencies
+pip install -r backend/requirements-dev.txt
+pip install -r frontend/requirements-dev.txt
+# Copy environment variables file
 cp .env.example .env
+# Edit .env file and add your OpenRouter API key and configure other settings
+nano .env
 ```
-### Running the backend
+### Running the application locally
+#### Backend (FastAPI + Uvicorn)
 ```bash
 cd backend
 uvicorn app.main:app --reload --port 12015
 ```
-### Running the frontend
+#### Frontend (Flask)
 ```bash
 cd frontend
-flask --app app.main run --port 12016
+flask --app app.main run --port 12016 --debug
 ```
+### Running tests
+```bash
+# Run all tests
+pytest
+# Run backend tests only
+pytest backend/tests/
+# Run frontend tests only
+pytest frontend/tests/
+```
+### Available Environment Variables
+| Variable | Description | Default |
+| -------------------- | --------------------------- | ------------------- |
+| `OPENROUTER_API_KEY` | Your OpenRouter API key | _Required_ |
+| `ADMIN_EMAIL` | Default admin user email | `ai@allucanget.biz` |
+| `ADMIN_PASSWORD` | Default admin user password | `admin123` |
+| `DATABASE_URL` | DuckDB database path | `../data/app.db` |
 ## Default admin user
 On first startup a default admin account is created:
@@ -79,17 +113,25 @@ Deployed on [Coolify](https://coolify.io) using Nixpacks. See [docs/deployment/c
 ```txt
 backend/          FastAPI backend
   app/
-    routers/      API route handlers
-    services/     Business logic
-    models/       Pydantic models
-  tests/
+    __init__.py   Package initialization
+    db.py         Database connection and operations
+    dependencies.py  Dependency injection
+    main.py       FastAPI application entrypoint
+    models/       Pydantic and database models
+    routers/      API route handlers (auth, users, admin, generate, gallery)
+    services/     Business logic for AI generation, users, admin, etc.
+  tests/          Backend test suite
 frontend/         Flask frontend
   app/
+    __init__.py   Package initialization
+    main.py       Flask application entrypoint
     templates/    Jinja2 HTML templates
     static/       CSS, JS, images
-  tests/
-data/             DuckDB database files (gitignored)
-docs/             Architecture documentation
+  tests/          Frontend test suite
+data/             DuckDB database files, uploaded media, and generated content
+logs/             Application logs
+docs/             Architecture documentation (arc42 template)
+nginx/            Nginx configuration for Coolify deployment
 ```
 ## Documentation
+10
@@ -114,6 +114,16 @@ def _run_migrations(conn: duckdb.DuckDBPyConnection) -> None:
     conn.execute("""
         ALTER TABLE models_cache ADD COLUMN IF NOT EXISTS output_modalities VARCHAR
     """)
+    # Migration: add video job request params + generation type
+    conn.execute("""
+        ALTER TABLE generated_videos ADD COLUMN IF NOT EXISTS request_params VARCHAR
+    """)
+    conn.execute("""
+        ALTER TABLE generated_videos ADD COLUMN IF NOT EXISTS generation_type VARCHAR DEFAULT 'text_to_video'
+    """)
+    conn.execute("""
+        ALTER TABLE generated_videos ADD COLUMN IF NOT EXISTS error VARCHAR
+    """)
     _seed_admin(conn)
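The migration hunk above leans on DuckDB's `ADD COLUMN IF NOT EXISTS` to stay idempotent across restarts. A minimal sketch of the same pattern using the stdlib `sqlite3` instead (SQLite has no `IF NOT EXISTS` on `ADD COLUMN`, so the hypothetical helper `add_column_if_missing` checks the schema first):

```python
import sqlite3

def add_column_if_missing(conn, table, column, decl):
    # sqlite3 lacks ADD COLUMN IF NOT EXISTS, so inspect the schema first;
    # re-running the migration is then a harmless no-op
    cols = [r[1] for r in conn.execute(f"PRAGMA table_info({table})")]
    if column not in cols:
        conn.execute(f"ALTER TABLE {table} ADD COLUMN {column} {decl}")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE generated_videos (id INTEGER PRIMARY KEY, status TEXT)")
for _ in range(2):  # safe to run twice
    add_column_if_missing(conn, "generated_videos", "error", "TEXT")
    add_column_if_missing(conn, "generated_videos", "generation_type",
                          "TEXT DEFAULT 'text_to_video'")
cols = [r[1] for r in conn.execute("PRAGMA table_info(generated_videos)")]
print(cols)  # ['id', 'status', 'error', 'generation_type']
```

The same check-then-alter shape is what `IF NOT EXISTS` gives DuckDB for free.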
+9 -1
@@ -5,7 +5,9 @@ from .routers import ai
 from .routers import generate
 from .routers import images
 from .routers import models
-from .db import close_db, init_db
+from .db import close_db, get_conn, get_write_lock, init_db
+from .services.video_worker import run_worker
+import asyncio
 import os
 from contextlib import asynccontextmanager
@@ -19,7 +21,13 @@ load_dotenv()
 @asynccontextmanager
 async def lifespan(app: FastAPI):
     init_db()
+    worker_task = asyncio.create_task(run_worker(get_conn(), get_write_lock()))
     yield
+    worker_task.cancel()
+    try:
+        await worker_task
+    except asyncio.CancelledError:
+        pass
     close_db()
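The lifespan change above follows the standard asyncio pattern: spawn the worker on startup, then cancel and await it on shutdown, swallowing the expected `CancelledError`. A self-contained sketch of the pattern with a stand-in worker (`fake_worker` and the argument-free `lifespan` are illustrative, not the app's actual signatures):

```python
import asyncio
from contextlib import asynccontextmanager

ticks = 0

async def fake_worker():
    # stand-in for run_worker: loop until cancelled
    global ticks
    while True:
        ticks += 1
        await asyncio.sleep(0.01)

@asynccontextmanager
async def lifespan():
    # start the background task on startup...
    task = asyncio.create_task(fake_worker())
    try:
        yield
    finally:
        # ...and cancel + await it on shutdown, swallowing CancelledError
        task.cancel()
        try:
            await task
        except asyncio.CancelledError:
            pass

async def main():
    async with lifespan():
        await asyncio.sleep(0.05)  # simulate the app serving requests

asyncio.run(main())
```

Awaiting the cancelled task before `close_db()` matters: it guarantees the worker has released the connection before it is closed.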
+2 -1
@@ -91,7 +91,8 @@ class VideoFromImageRequest(BaseModel):
 class VideoResponse(BaseModel):
-    id: str
+    id: str  # This is the job_id from the provider
+    db_id: str | None = None  # This is the UUID from our generated_videos table
     model: str
     status: str  # "queued" | "processing" | "completed" | "failed"
     polling_url: str | None = None
+142 -7
@@ -1,5 +1,5 @@
 """Admin router: operational endpoints for application management."""
-from datetime import datetime, timezone
+from datetime import datetime, timedelta, timezone
 from typing import Any
 from fastapi import APIRouter, Depends
@@ -7,6 +7,7 @@ from fastapi import APIRouter, Depends
 from ..db import get_conn, get_write_lock
 from ..dependencies import require_admin
 from ..services import models as models_service
+from ..services.models import mark_timed_out_video_jobs
 router = APIRouter(prefix="/admin", tags=["admin"])
@@ -20,10 +21,18 @@ async def get_stats(_: dict = Depends(require_admin)) -> dict:
     sql_token_count = "SELECT COUNT(*) FROM refresh_tokens"
     sql_tokens_active = "SELECT COUNT(*) FROM refresh_tokens WHERE revoked = false AND expires_at > ?"
     now = datetime.now(timezone.utc)
-    total_users = conn.execute(sql_user_count).fetchone()[0]
+    total_users_row = conn.execute(sql_user_count).fetchone()
+    total_users = total_users_row[0] if total_users_row else 0
     users_by_role = conn.execute(sql_user_counts).fetchall()
-    total_tokens = conn.execute(sql_token_count).fetchone()[0]
-    active_tokens = conn.execute(sql_tokens_active, [now]).fetchone()[0]
+    total_tokens_row = conn.execute(sql_token_count).fetchone()
+    total_tokens = total_tokens_row[0] if total_tokens_row else 0
+    active_tokens_row = conn.execute(sql_tokens_active, [now]).fetchone()
+    active_tokens = active_tokens_row[0] if active_tokens_row else 0
     return {
         "users": {
             "total": total_users,
@@ -41,7 +50,8 @@ async def get_stats(_: dict = Depends(require_admin)) -> dict:
 async def db_health(_: dict = Depends(require_admin)) -> dict:
     """Verify DuckDB is reachable."""
     conn = get_conn()
-    result = conn.execute("SELECT 1").fetchone()[0]
+    result_row = conn.execute("SELECT 1").fetchone()
+    result = result_row[0] if result_row else 0
     return {"status": "ok" if result == 1 else "error"}
@@ -54,9 +64,14 @@ async def purge_tokens(_: dict = Depends(require_admin)) -> dict:
     sql_count = "SELECT COUNT(*) FROM refresh_tokens"
     sql_delete = "DELETE FROM refresh_tokens WHERE revoked = true OR expires_at <= ?"
     async with lock:
-        before = conn.execute(sql_count).fetchone()[0]
+        before_row = conn.execute(sql_count).fetchone()
+        before = before_row[0] if before_row else 0
         conn.execute(sql_delete, [now])
-        after = conn.execute(sql_count).fetchone()[0]
+        after_row = conn.execute(sql_count).fetchone()
+        after = after_row[0] if after_row else 0
     return {"deleted": before - after, "remaining": after}
@@ -90,3 +105,123 @@ async def refresh_models(
         "total_models": status.get("model_count"),
         "last_updated": status.get("last_updated"),
     }
+
+
+@router.get("/videos")
+async def admin_list_video_jobs(_: dict = Depends(require_admin)) -> list[dict[str, Any]]:
+    """Return all video generation jobs across all users."""
+    conn = get_conn()
+    rows = conn.execute(
+        """
+        SELECT
+            v.id, v.job_id, v.user_id, u.email, v.model_id, v.prompt,
+            v.status, v.video_url, v.created_at, v.updated_at
+        FROM generated_videos v
+        LEFT JOIN users u ON v.user_id = u.id
+        ORDER BY v.created_at DESC
+        """
+    ).fetchall()
+    return [
+        {
+            "id": str(row[0]),
+            "job_id": row[1],
+            "user_id": str(row[2]),
+            "user_email": row[3],
+            "model_id": row[4],
+            "prompt": row[5],
+            "status": row[6],
+            "video_url": row[7],
+            "created_at": row[8].isoformat() if row[8] else None,
+            "updated_at": row[9].isoformat() if row[9] else None,
+        }
+        for row in rows
+    ]
+
+
+@router.post("/videos/{job_id}/cancel", status_code=200)
+async def admin_cancel_video_job(job_id: str, _: dict = Depends(require_admin)) -> dict[str, str]:
+    """Mark a video job as 'cancelled'. Does not stop the provider job."""
+    conn = get_conn()
+    lock = get_write_lock()
+    now = datetime.now(timezone.utc)
+    async with lock:
+        conn.execute(
+            "UPDATE generated_videos SET status = 'cancelled', updated_at = ? WHERE id = ?",
+            [now, job_id],
+        )
+    return {"status": "ok", "job_id": job_id}
+
+
+@router.post("/videos/purge", status_code=200)
+async def admin_purge_video_jobs(_: dict = Depends(require_admin)) -> dict[str, Any]:
+    """Delete all completed, failed, or cancelled jobs older than 30 days."""
+    conn = get_conn()
+    lock = get_write_lock()
+    thirty_days_ago = datetime.now(timezone.utc) - timedelta(days=30)
+    sql_count = "SELECT COUNT(*) FROM generated_videos"
+    sql_delete = """
+        DELETE FROM generated_videos
+        WHERE status IN ('completed', 'failed', 'cancelled')
+          AND updated_at < ?
+    """
+    async with lock:
+        before_row = conn.execute(sql_count).fetchone()
+        before = before_row[0] if before_row else 0
+        conn.execute(sql_delete, [thirty_days_ago])
+        after_row = conn.execute(sql_count).fetchone()
+        after = after_row[0] if after_row else 0
+    return {"deleted": before - after, "remaining": after}
+
+
+@router.post("/videos/timed-out", status_code=200)
+async def admin_mark_timed_out(_: dict = Depends(require_admin)) -> dict[str, int]:
+    """Mark video jobs that have been in 'queued' or 'processing' status for too long as 'failed'."""
+    conn = get_conn()
+    count = mark_timed_out_video_jobs(conn, timeout_minutes=120)
+    return {"timed_out": count}
+
+
+@router.post("/videos/{job_id}/retry", status_code=200)
+async def admin_retry_video_job(job_id: str, _: dict = Depends(require_admin)) -> dict[str, str]:
+    """Reset a failed or cancelled video job back to 'queued' for reprocessing."""
+    conn = get_conn()
+    lock = get_write_lock()
+    now = datetime.now(timezone.utc)
+    async with lock:
+        row = conn.execute(
+            "SELECT status FROM generated_videos WHERE id = ?", [job_id]
+        ).fetchone()
+        if row is None:
+            from fastapi import HTTPException
+            raise HTTPException(status_code=404, detail="Job not found")
+        if row[0] not in ("failed", "cancelled"):
+            from fastapi import HTTPException
+            raise HTTPException(
+                status_code=400, detail=f"Cannot retry job with status '{row[0]}'")
+        conn.execute(
+            "UPDATE generated_videos SET status = 'queued', updated_at = ? WHERE id = ?",
+            [now, job_id],
+        )
+    return {"status": "ok", "job_id": job_id}
+
+
+@router.delete("/videos/{job_id}", status_code=200)
+async def admin_delete_video_job(job_id: str, _: dict = Depends(require_admin)) -> dict[str, str]:
+    """Permanently delete a video job record."""
+    conn = get_conn()
+    lock = get_write_lock()
+    async with lock:
+        row = conn.execute(
+            "SELECT id FROM generated_videos WHERE id = ?", [job_id]
+        ).fetchone()
+        if row is None:
+            from fastapi import HTTPException
+            raise HTTPException(status_code=404, detail="Job not found")
+        conn.execute("DELETE FROM generated_videos WHERE id = ?", [job_id])
+    return {"status": "ok", "job_id": job_id}
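The purge endpoint above deletes only jobs that are both terminal and older than 30 days. The eligibility rule can be isolated as a pure predicate, sketched here with a hypothetical `purgeable` helper (not part of the diff):

```python
from datetime import datetime, timedelta, timezone

TERMINAL = {"completed", "failed", "cancelled"}

def purgeable(job_status, updated_at, now, max_age_days=30):
    # Only terminal jobs older than the cutoff may be deleted;
    # queued/processing jobs are never purged, however old
    return job_status in TERMINAL and updated_at < now - timedelta(days=max_age_days)

now = datetime(2026, 4, 29, tzinfo=timezone.utc)
old = now - timedelta(days=31)
recent = now - timedelta(days=1)
assert purgeable("failed", old, now)
assert not purgeable("failed", recent, now)   # terminal but too recent
assert not purgeable("processing", old, now)  # old but not terminal
```

Keeping the rule in one place makes it easy to test without a database, while the SQL `WHERE` clause expresses the same condition set-wise.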
+6 -5
@@ -24,7 +24,8 @@ async def register(body: RegisterRequest) -> dict:
     try:
         user = await register_user(body.email, body.password)
     except ValueError as exc:
-        raise HTTPException(status_code=status.HTTP_409_CONFLICT, detail=str(exc))
+        raise HTTPException(
+            status_code=status.HTTP_409_CONFLICT, detail=str(exc))
     return {"id": user["id"], "email": user["email"], "role": user["role"]}
@@ -40,7 +41,8 @@ async def login(body: LoginRequest) -> TokenResponse:
     jti = str(uuid.uuid4())
     await store_refresh_token(user["id"], jti)
     return TokenResponse(
-        access_token=create_access_token(user["id"], user["email"], user["role"]),
+        access_token=create_access_token(
+            user["id"], user["email"], user["role"]),
         refresh_token=create_refresh_token(user["id"], jti),
     )
@@ -73,9 +75,8 @@ async def refresh(body: RefreshRequest) -> TokenResponse:
     from ..db import get_conn
     conn = get_conn()
-    row = conn.execute(
-        "SELECT email, role FROM users WHERE id = ?", [user_id]
-    ).fetchone()
+    sql_fetch = "SELECT email, role FROM users WHERE id = ?"
+    row = conn.execute(sql_fetch, [user_id]).fetchone()
     if row is None:
         raise credentials_error
+82 -93
@@ -1,4 +1,5 @@
 """Generate router: text, image, video, and image-to-video generation."""
+import json
 from datetime import datetime, timezone
 import httpx
@@ -129,15 +130,13 @@ async def generate_image(
     user_id = current_user.get("id") or current_user.get("sub")
     now = datetime.now(timezone.utc).replace(tzinfo=None)
     stored: list[ImageResult] = []
+    sql_insert = "INSERT INTO generated_images (user_id, model_id, prompt, image_data, created_at) VALUES (?, ?, ?, ?, ?) RETURNING id"
     async with get_write_lock():
         conn = get_conn()
         for img in images:
             if img.url:
                 row = conn.execute(
-                    """INSERT INTO generated_images (user_id, model_id, prompt, image_data, created_at)
-                    VALUES (?, ?, ?, ?, ?) RETURNING id""",
-                    [user_id, body.model, body.prompt, img.url, now],
-                ).fetchone()
+                    sql_insert, [user_id, body.model, body.prompt, img.url, now],
+                ).fetchone()
                 image_id = str(row[0]) if row else None
             else:
                 image_id = None
@@ -167,13 +166,8 @@ async def list_generated_images(
     """Return all generated images for the current user, newest first."""
     user_id = current_user.get("id") or current_user.get("sub")
     conn = get_conn()
-    rows = conn.execute(
-        """SELECT id, model_id, prompt, image_data, created_at
-        FROM generated_images
-        WHERE user_id = ?
-        ORDER BY created_at DESC""",
-        [user_id],
-    ).fetchall()
+    sql_fetch = "SELECT id, model_id, prompt, image_data, created_at FROM generated_images WHERE user_id = ? ORDER BY created_at DESC"
+    rows = conn.execute(sql_fetch, [user_id]).fetchall()
     return [
         {
             "id": str(r[0]),
@@ -216,50 +210,32 @@ async def generate_video(
     body: VideoRequest,
     current_user: dict = Depends(get_current_user),
 ) -> VideoResponse:
-    """Generate a video from a text prompt."""
-    try:
-        result = await openrouter.generate_video(
-            model=body.model,
-            prompt=body.prompt,
-            duration_seconds=body.duration_seconds,
-            aspect_ratio=body.aspect_ratio,
-            resolution=body.resolution,
-        )
-    except httpx.HTTPStatusError as exc:
-        detail = (
-            f"OpenRouter API error: {exc.response.status_code} - {exc.response.text}"
-        )
-        raise HTTPException(
-            status_code=status.HTTP_502_BAD_GATEWAY, detail=detail)
-    except Exception as exc:
-        raise HTTPException(
-            status_code=status.HTTP_502_BAD_GATEWAY, detail=f"OpenRouter error: {exc}"
-        )
+    """Queue a text-to-video generation job for background processing."""
     user_id = current_user.get("id") or current_user.get("sub")
-    job_id = result.get("id", "")
-    polling_url = result.get("polling_url")
-    job_status = result.get("status", "pending")
     now = datetime.now(timezone.utc).replace(tzinfo=None)
+    request_params = json.dumps({
+        "model": body.model,
+        "prompt": body.prompt,
+        "duration_seconds": body.duration_seconds,
+        "aspect_ratio": body.aspect_ratio,
+        "resolution": body.resolution,
+    })
+    db_id = None
     async with get_write_lock():
         conn = get_conn()
-        conn.execute(
-            """INSERT INTO generated_videos (user_id, job_id, model_id, prompt, polling_url, status, created_at, updated_at)
-            VALUES (?, ?, ?, ?, ?, ?, ?, ?)""",
-            [user_id, job_id, body.model, body.prompt,
-             polling_url, job_status, now, now],
-        )
-    urls = result.get("unsigned_urls") or result.get("video_urls")
+        row = conn.execute(
+            """INSERT INTO generated_videos
+            (user_id, job_id, model_id, prompt, status, request_params, generation_type, created_at, updated_at)
+            VALUES (?, ?, ?, ?, 'queued', ?, 'text_to_video', ?, ?) RETURNING id""",
+            [user_id, "", body.model, body.prompt, request_params, now, now],
+        ).fetchone()
+        if row:
+            db_id = str(row[0])
     return VideoResponse(
-        id=job_id,
+        id="",
+        db_id=db_id,
         model=body.model,
-        status=job_status,
-        polling_url=polling_url,
-        video_urls=urls,
-        video_url=(urls or [None])[0],
-        error=result.get("error"),
-        metadata=result.get("metadata"),
+        status="queued",
     )
@@ -268,51 +244,33 @@ async def generate_video_from_image(
     body: VideoFromImageRequest,
     current_user: dict = Depends(get_current_user),
 ) -> VideoResponse:
-    """Generate a video from an image and a text prompt."""
-    try:
-        result = await openrouter.generate_video_from_image(
-            model=body.model,
-            image_url=body.image_url,
-            prompt=body.prompt,
-            duration_seconds=body.duration_seconds,
-            aspect_ratio=body.aspect_ratio,
-            resolution=body.resolution,
-        )
-    except httpx.HTTPStatusError as exc:
-        detail = (
-            f"OpenRouter API error: {exc.response.status_code} - {exc.response.text}"
-        )
-        raise HTTPException(
-            status_code=status.HTTP_502_BAD_GATEWAY, detail=detail)
-    except Exception as exc:
-        raise HTTPException(
-            status_code=status.HTTP_502_BAD_GATEWAY, detail=f"OpenRouter error: {exc}"
-        )
+    """Queue an image-to-video generation job for background processing."""
     user_id = current_user.get("id") or current_user.get("sub")
-    job_id = result.get("id", "")
-    polling_url = result.get("polling_url")
-    job_status = result.get("status", "pending")
     now = datetime.now(timezone.utc).replace(tzinfo=None)
+    request_params = json.dumps({
+        "model": body.model,
+        "image_url": body.image_url,
+        "prompt": body.prompt,
+        "duration_seconds": body.duration_seconds,
+        "aspect_ratio": body.aspect_ratio,
+        "resolution": body.resolution,
+    })
+    db_id = None
     async with get_write_lock():
         conn = get_conn()
-        conn.execute(
-            """INSERT INTO generated_videos (user_id, job_id, model_id, prompt, polling_url, status, created_at, updated_at)
-            VALUES (?, ?, ?, ?, ?, ?, ?, ?)""",
-            [user_id, job_id, body.model, body.prompt,
-             polling_url, job_status, now, now],
-        )
-    urls = result.get("unsigned_urls") or result.get("video_urls")
+        row = conn.execute(
+            """INSERT INTO generated_videos
+            (user_id, job_id, model_id, prompt, status, request_params, generation_type, created_at, updated_at)
+            VALUES (?, ?, ?, ?, 'queued', ?, 'image_to_video', ?, ?) RETURNING id""",
+            [user_id, "", body.model, body.prompt, request_params, now, now],
+        ).fetchone()
+        if row:
+            db_id = str(row[0])
     return VideoResponse(
-        id=job_id,
+        id="",
+        db_id=db_id,
         model=body.model,
-        status=job_status,
-        polling_url=polling_url,
-        video_urls=urls,
-        video_url=(urls or [None])[0],
-        error=result.get("error"),
-        metadata=result.get("metadata"),
+        status="queued",
     )
@@ -364,7 +322,7 @@ async def list_generated_videos(
     user_id = current_user.get("id") or current_user.get("sub")
     conn = get_conn()
     rows = conn.execute(
-        """SELECT id, job_id, model_id, prompt, polling_url, status, video_url, created_at
+        """SELECT id, job_id, model_id, prompt, polling_url, status, video_url, error, created_at
        FROM generated_videos
        WHERE user_id = ?
        ORDER BY created_at DESC""",
@@ -379,7 +337,8 @@ async def list_generated_videos(
             "polling_url": r[4],
             "status": r[5],
             "video_url": r[6],
-            "created_at": r[7].isoformat() if r[7] else None,
+            "error": r[7],
+            "created_at": r[8].isoformat() if r[8] else None,
         }
         for r in rows
     ]
@@ -394,7 +353,7 @@ async def get_generated_video(
     user_id = current_user.get("id") or current_user.get("sub")
     conn = get_conn()
     row = conn.execute(
-        """SELECT id, job_id, model_id, prompt, polling_url, status, video_url, created_at, updated_at
+        """SELECT id, job_id, model_id, prompt, polling_url, status, video_url, error, created_at, updated_at
        FROM generated_videos
        WHERE id = ? AND user_id = ?""",
        [video_id, user_id],
@@ -409,6 +368,36 @@ async def get_generated_video(
         "polling_url": row[4],
         "status": row[5],
         "video_url": row[6],
-        "created_at": row[7].isoformat() if row[7] else None,
-        "updated_at": row[8].isoformat() if row[8] else None,
+        "error": row[7],
+        "created_at": row[8].isoformat() if row[8] else None,
+        "updated_at": row[9].isoformat() if row[9] else None,
     }
+
+
+@router.post("/videos/{video_id}/cancel", status_code=200)
+async def cancel_video_job(
+    video_id: str,
+    current_user: dict = Depends(get_current_user),
+) -> dict[str, str]:
+    """Mark a video job as 'cancelled' if it belongs to the current user and is not terminal."""
+    user_id = current_user.get("id") or current_user.get("sub")
+    conn = get_conn()
+    row = conn.execute(
+        "SELECT status FROM generated_videos WHERE id = ? AND user_id = ?",
+        [video_id, user_id],
+    ).fetchone()
+    if not row:
+        raise HTTPException(status_code=404, detail="Video job not found")
+    job_status = row[0]
+    if job_status in ("completed", "failed", "cancelled"):
+        raise HTTPException(
+            status_code=status.HTTP_400_BAD_REQUEST,
+            detail=f"Cannot cancel job with status '{job_status}'",
+        )
+    now = datetime.now(timezone.utc).replace(tzinfo=None)
+    async with get_write_lock():
+        conn.execute(
+            "UPDATE generated_videos SET status = 'cancelled', updated_at = ? WHERE id = ?",
+            [now, video_id],
+        )
    return {"status": "ok", "job_id": video_id}
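The rewritten endpoints above no longer call OpenRouter directly; they serialise the request into `request_params` so the background worker can replay it later. A minimal sketch of that round-trip, with a hypothetical `build_request_params` helper and a made-up model name:

```python
import json

def build_request_params(model, prompt, duration_seconds=None,
                         aspect_ratio="16:9", resolution=None):
    # Serialise the request so a background worker can replay it later;
    # json keeps the payload storable in a VARCHAR column
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "duration_seconds": duration_seconds,
        "aspect_ratio": aspect_ratio,
        "resolution": resolution,
    })

raw = build_request_params("some/video-model", "a cat surfing")
params = json.loads(raw)  # what the worker does on dequeue
assert params["model"] == "some/video-model"
assert params["aspect_ratio"] == "16:9"
assert params["resolution"] is None
```

Because the full request survives in the row, a failed or cancelled job can be retried by simply resetting its status to `'queued'`, which is exactly what the admin retry endpoint does.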
+23 -28
@@ -35,7 +35,8 @@ def verify_password(plain: str, hashed: str) -> bool:
 # --- Tokens ---
 def create_access_token(user_id: str, email: str, role: str) -> str:
-    expire = datetime.now(timezone.utc) + timedelta(minutes=ACCESS_TOKEN_EXPIRE_MINUTES)
+    expire = datetime.now(timezone.utc) + \
+        timedelta(minutes=ACCESS_TOKEN_EXPIRE_MINUTES)
     payload = {
         "sub": user_id,
         "email": email,
@@ -47,7 +48,8 @@ def create_access_token(user_id: str, email: str, role: str) -> str:
 def create_refresh_token(user_id: str, jti: str) -> str:
-    expire = datetime.now(timezone.utc) + timedelta(days=REFRESH_TOKEN_EXPIRE_DAYS)
+    expire = datetime.now(timezone.utc) + \
+        timedelta(days=REFRESH_TOKEN_EXPIRE_DAYS)
     payload = {
         "sub": user_id,
         "jti": jti,
@@ -68,28 +70,25 @@ async def register_user(email: str, password: str) -> dict[str, Any]:
     """Insert a new user. Returns the created user row."""
     conn = get_conn()
     lock = get_write_lock()
+    sql_check = "SELECT id FROM users WHERE email = ?"
+    sql_insert = "INSERT INTO users (email, password_hash) VALUES (?, ?)"
+    sql_fetch = "SELECT id, email, role FROM users WHERE email = ?"
     async with lock:
-        existing = conn.execute(
-            "SELECT id FROM users WHERE email = ?", [email]
-        ).fetchone()
+        existing = conn.execute(sql_check, [email]).fetchone()
         if existing:
             raise ValueError("Email already registered.")
-        conn.execute(
-            "INSERT INTO users (email, password_hash) VALUES (?, ?)",
-            [email, hash_password(password)],
-        )
-        row = conn.execute(
-            "SELECT id, email, role FROM users WHERE email = ?", [email]
-        ).fetchone()
+        conn.execute(sql_insert, [email, hash_password(password)])
+        row = conn.execute(sql_fetch, [email]).fetchone()
     if row is None:
         raise RuntimeError("Failed to fetch user after registration.")
     return {"id": str(row[0]), "email": row[1], "role": row[2]}
 async def authenticate_user(email: str, password: str) -> dict[str, Any] | None:
     """Return user dict if credentials are valid, else None."""
     conn = get_conn()
-    row = conn.execute(
-        "SELECT id, email, password_hash, role FROM users WHERE email = ?", [email]
-    ).fetchone()
+    sql_fetch = "SELECT id, email, password_hash, role FROM users WHERE email = ?"
+    row = conn.execute(sql_fetch, [email]).fetchone()
     if row is None or not verify_password(password, row[2]):
         return None
     return {"id": str(row[0]), "email": row[1], "role": row[3]}
@@ -99,34 +98,30 @@ async def store_refresh_token(user_id: str, jti: str) -> None:
     """Persist a refresh token JTI in the database."""
     conn = get_conn()
     lock = get_write_lock()
+    sql_insert = "INSERT INTO refresh_tokens (jti, user_id, expires_at) VALUES (?, ?, ?)"
     from datetime import timedelta
-    expires_at = datetime.now(timezone.utc) + timedelta(days=REFRESH_TOKEN_EXPIRE_DAYS)
+    expires_at = datetime.now(timezone.utc) + \
+        timedelta(days=REFRESH_TOKEN_EXPIRE_DAYS)
     async with lock:
-        conn.execute(
-            "INSERT INTO refresh_tokens (jti, user_id, expires_at) VALUES (?, ?, ?)",
-            [jti, user_id, expires_at],
-        )
+        conn.execute(sql_insert, [jti, user_id, expires_at])
 async def revoke_refresh_token(jti: str) -> None:
     """Mark a refresh token as revoked."""
     conn = get_conn()
     lock = get_write_lock()
+    sql_update = "UPDATE refresh_tokens SET revoked = true WHERE jti = ?"
     async with lock:
-        conn.execute(
-            "UPDATE refresh_tokens SET revoked = true WHERE jti = ?", [jti]
-        )
+        conn.execute(sql_update, [jti])
 async def validate_refresh_token_jti(jti: str, user_id: str) -> bool:
     """Return True if the JTI exists, is not revoked, and belongs to user_id."""
     conn = get_conn()
     now = datetime.now(timezone.utc)
-    row = conn.execute(
-        """
-        SELECT 1 FROM refresh_tokens
-        WHERE jti = ? AND user_id = ? AND revoked = false AND expires_at > ?
-        """,
-        [jti, user_id, now],
-    ).fetchone()
+    sql_select = """
+        SELECT 1 FROM refresh_tokens
+        WHERE jti = ? AND user_id = ? AND revoked = false AND expires_at > ?
+    """
+    row = conn.execute(sql_select, [jti, user_id, now]).fetchone()
     return row is not None
+37
@@ -207,3 +207,40 @@ def get_cache_status(conn: duckdb.DuckDBPyConnection) -> dict[str, Any]:
     ).fetchone()
     last_updated, model_count = (row[0], row[1]) if row else (None, 0)
     return {"last_updated": last_updated, "model_count": model_count}
+
+
+def mark_timed_out_video_jobs(conn: duckdb.DuckDBPyConnection, timeout_minutes: int = 120) -> int:
+    """Mark video jobs that have been in 'queued' or 'processing' status for too long as 'failed'.
+
+    Returns the number of jobs marked as timed out.
+    """
+    timeout_threshold = datetime.now(timezone.utc) - timedelta(minutes=timeout_minutes)
+    # Find timed out jobs
+    timed_out_rows = conn.execute(
+        """
+        SELECT id FROM generated_videos
+        WHERE status IN ('queued', 'processing')
+          AND updated_at < ?
+        """,
+        [timeout_threshold],
+    ).fetchall()
+    if not timed_out_rows:
+        return 0
+    job_ids = [row[0] for row in timed_out_rows]
+    placeholders = ",".join(["?"] * len(job_ids))
+    # Update them to failed
+    conn.execute(
+        f"""
+        UPDATE generated_videos
+        SET status = 'failed', updated_at = ?
+        WHERE id IN ({placeholders})
+        """,
+        [datetime.now(timezone.utc)] + job_ids,
+    )
+    return len(job_ids)
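The function above builds the `IN (...)` clause by joining one `?` placeholder per id, which keeps the query parameterised even though the list length is dynamic. A self-contained sketch of the same select-then-update pattern against stdlib `sqlite3` (the `jobs` table and its rows are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id TEXT, status TEXT)")
conn.executemany("INSERT INTO jobs VALUES (?, ?)", [
    ("a", "queued"),
    ("b", "processing"),
    ("c", "completed"),
])

# Step 1: find the stuck jobs
stuck = [r[0] for r in conn.execute(
    "SELECT id FROM jobs WHERE status IN ('queued', 'processing')")]

# Step 2: one '?' per id keeps the UPDATE parameterised, never string-spliced
placeholders = ",".join("?" * len(stuck))
conn.execute(
    f"UPDATE jobs SET status = 'failed' WHERE id IN ({placeholders})", stuck)

failed = sorted(r[0] for r in conn.execute(
    "SELECT id FROM jobs WHERE status = 'failed'"))
print(failed)  # ['a', 'b']
```

Only the placeholder count is interpolated into the SQL string; the ids themselves always travel as bound parameters.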
+159
@@ -0,0 +1,159 @@
+"""Background worker: processes queued/processing video generation jobs."""
+import asyncio
+import json
+import logging
+from datetime import datetime, timezone
+
+import duckdb
+
+from . import openrouter
+from .models import mark_timed_out_video_jobs
+
+logger = logging.getLogger(__name__)
+
+# Interval between worker ticks (seconds)
+WORKER_INTERVAL = 15
+# Jobs to process per tick (prevents unbounded bursts)
+BATCH_SIZE = 5
+
+
+async def process_queued_jobs(conn: duckdb.DuckDBPyConnection, lock: asyncio.Lock) -> int:
+    """Submit queued jobs to OpenRouter and transition them to 'processing'."""
+    rows = conn.execute(
+        """SELECT id, generation_type, request_params
+        FROM generated_videos
+        WHERE status = 'queued' AND request_params IS NOT NULL
+        ORDER BY created_at ASC
+        LIMIT ?""",
+        [BATCH_SIZE],
+    ).fetchall()
+    processed = 0
+    for row in rows:
+        db_id, generation_type, raw_params = str(row[0]), row[1], row[2]
+        try:
+            params = json.loads(raw_params)
+        except (json.JSONDecodeError, TypeError):
+            logger.error("Bad request_params for video job %s", db_id)
+            continue
+        try:
+            if generation_type == "image_to_video":
+                result = await openrouter.generate_video_from_image(
+                    model=params["model"],
+                    image_url=params.get("image_url", ""),
+                    prompt=params.get("prompt", ""),
+                    duration_seconds=params.get("duration_seconds"),
+                    aspect_ratio=params.get("aspect_ratio", "16:9"),
+                    resolution=params.get("resolution"),
+                )
+            else:
+                result = await openrouter.generate_video(
+                    model=params["model"],
+                    prompt=params.get("prompt", ""),
+                    duration_seconds=params.get("duration_seconds"),
+                    aspect_ratio=params.get("aspect_ratio", "16:9"),
+                    resolution=params.get("resolution"),
+                )
+        except Exception as exc:
+            logger.warning("OpenRouter call failed for job %s: %s", db_id, exc)
+            now = datetime.now(timezone.utc).replace(tzinfo=None)
+            async with lock:
+                conn.execute(
+                    "UPDATE generated_videos SET status = 'failed', error = ?, updated_at = ? WHERE id = ?",
+                    [str(exc), now, db_id],
+                )
+            continue
+        job_id = result.get("id", "")
+        polling_url = result.get("polling_url")
+        new_status = result.get("status", "processing")
+        # Normalise terminal statuses returned immediately (rare but possible)
+        if new_status not in ("queued", "processing", "completed", "failed", "cancelled"):
+            new_status = "processing"
+        urls = result.get("unsigned_urls") or result.get("video_urls")
+        video_url = (urls or [None])[0]
+        now = datetime.now(timezone.utc).replace(tzinfo=None)
+        async with lock:
+            conn.execute(
+                """UPDATE generated_videos
+                SET job_id = ?, polling_url = ?, status = ?, video_url = ?, updated_at = ?
+                WHERE id = ?""",
+                [job_id, polling_url, new_status, video_url, now, db_id],
+            )
+        processed += 1
+        logger.info("Video job %s -> %s (provider id: %s)",
+                    db_id, new_status, job_id)
+    return processed
+
+
+async def process_processing_jobs(conn: duckdb.DuckDBPyConnection, lock: asyncio.Lock) -> int:
+    """Poll in-progress jobs and update to 'completed' or 'failed'."""
+    rows = conn.execute(
+        """SELECT id, polling_url
+        FROM generated_videos
+        WHERE status = 'processing' AND polling_url IS NOT NULL
+        ORDER BY updated_at ASC
+        LIMIT ?""",
+        [BATCH_SIZE],
+    ).fetchall()
+    updated = 0
+    for row in rows:
+        db_id, polling_url = str(row[0]), row[1]
+        try:
+            result = await openrouter.poll_video_status(polling_url)
+        except Exception as exc:
+            logger.warning("Polling failed for job %s: %s", db_id, exc)
+            continue
+        job_status = result.get("status", "processing")
+        if job_status not in ("completed", "failed"):
+            continue  # still in-progress, check again next tick
+        urls = result.get("unsigned_urls") or result.get("video_urls")
+        video_url = (urls or [None])[0]
+        error_msg = result.get("error")
+        now = datetime.now(timezone.utc).replace(tzinfo=None)
+        async with lock:
+            conn.execute(
+                """UPDATE generated_videos
+                SET status = ?, video_url = ?, error = ?, updated_at = ?
+                WHERE id = ?""",
+                [job_status, video_url, error_msg, now, db_id],
+            )
+        updated += 1
+        logger.info("Video job %s -> %s", db_id, job_status)
+    return updated
+
+
+async def worker_tick(conn: duckdb.DuckDBPyConnection, lock: asyncio.Lock) -> None:
+    """Single worker tick: submit queued, poll processing, expire timed-out."""
+    queued = await process_queued_jobs(conn, lock)
+    polled = await process_processing_jobs(conn, lock)
+    async with lock:
+        timed_out = mark_timed_out_video_jobs(conn, timeout_minutes=120)
+    if queued or polled or timed_out:
+        logger.info(
+            "Worker tick: submitted=%d polled=%d timed_out=%d",
+            queued, polled, timed_out,
+        )
+
+
+async def run_worker(conn: duckdb.DuckDBPyConnection, lock: asyncio.Lock) -> None:
+    """Infinite loop: run a worker tick every WORKER_INTERVAL seconds."""
+    logger.info("Video worker started (interval=%ds)", WORKER_INTERVAL)
+    while True:
+        try:
+            await worker_tick(conn, lock)
+        except asyncio.CancelledError:
+            logger.info("Video worker stopped.")
+            return
+        except Exception as exc:
+            logger.exception("Unexpected error in video worker: %s", exc)
+        await asyncio.sleep(WORKER_INTERVAL)
+4 -1
@@ -4,7 +4,8 @@ Describes the relevant requirements and the driving forces that software archite
## Requirements Overview
**Project name**: All You Can GET AI Biz
**Project name**: All You Can GET AI
**URL**: [https://ai.allucanget.biz](https://ai.allucanget.biz)
**Purpose**: Provide AI-powered text, image, and video generation services via a web application.
Users can choose between different AI models for:
@@ -14,6 +15,8 @@ Users can choose between different AI models for:
- Text-to-video generation
- Image-to-video generation
Users can create accounts, log in, and view their generation history in a gallery. An admin dashboard allows managing users, models, and video generation jobs.
## Quality Goals
| Priority | Quality Goal | Scenario |
+1 -1
@@ -22,5 +22,5 @@ Any requirement that constrains software architects in their freedom of design a
| Convention | Background / Motivation |
| -------------------- | --------------------------------------------------- |
| Python 3.11+ | Modern language features, type hints |
| Python 3.12+ | Modern language features, type hints |
| pytest for all tests | Consistent test tooling across backend and frontend |
+23 -10
@@ -5,21 +5,21 @@ Static decomposition of the system into building blocks (modules, components, su
## Level 1 Whitebox Overall System
```text
┌───────────────────────┐
│ Frontend (Flask)      │
└───────┬───────────────┘
        │ REST API calls
┌───────▼───────────────┐
│ FastAPI Backend       │
│ ├─ Auth Service       │
│ ├─ User Service       │
│ ├─ AI Service         │
│ └─ DB Service (DuckDB)│
└───────┬───────────────┘
        │ DB access
┌───────▼───────────────┐
│ DuckDB Database       │
└───────────────────────┘
```
**Motivation:** Separating the UI (Flask) from the API (FastAPI) allows independent scaling and testing of each layer.
@@ -66,17 +66,25 @@ Self-service profile management and admin user CRUD.
Operational endpoints for application management.
| Method | Path | Auth required | Admin only | Description |
| ------ | --------------------- | ------------- | ---------- | ------------------------------------- |
| ------ | --------------------------- | ------------- | ---------- | ------------------------------------------ |
| GET | `/admin/stats` | ✓ | ✓ | User counts by role, token activity |
| GET | `/admin/health/db` | ✓ | ✓ | DuckDB connectivity check |
| POST | `/admin/tokens/purge` | ✓ | ✓ | Remove expired/revoked refresh tokens |
| GET | `/admin/videos` | ✓ | ✓ | List all video jobs with user emails |
| POST | `/admin/videos/{id}/cancel` | ✓ | ✓ | Cancel a queued/processing video job |
| POST | `/admin/videos/{id}/retry` | ✓ | ✓ | Retry a failed/cancelled video job |
| DELETE | `/admin/videos/{id}` | ✓ | ✓ | Permanently delete a video job |
| POST | `/admin/videos/purge` | ✓ | ✓ | Delete old completed/failed/cancelled jobs |
| POST | `/admin/videos/timed-out` | ✓ | ✓ | Mark stale processing jobs as failed |
| GET | `/admin/models` | ✓ | ✓ | List cached OpenRouter models |
| POST | `/admin/models/refresh` | ✓ | ✓ | Refresh model cache from OpenRouter |
### White Box AI Service (`/ai`, `/generate`)
Model listing and multi-modal generation via openrouter.ai.
| Method | Path | Auth required | Description |
| ------ | ---------------------------- | ------------- | ------------------------------------------------------------------------------------------------------------------- |
| ------ | ------------------------------ | ------------- | ------------------------------------------------------------------------------------------------------------------- |
| GET | `/ai/models` | ✓ | List available OpenRouter models |
| POST | `/ai/chat` | ✓ | Multi-turn chat completion |
| POST | `/generate/text` | ✓ | Single-prompt text generation (optional system prompt) |
@@ -84,10 +92,15 @@ Model listing and multi-modal generation via openrouter.ai.
| POST | `/generate/video` | ✓ | Text-to-video (Sora 2 Pro, Veo 3.1 Fast) — returns `polling_url` |
| POST | `/generate/video/from-image` | ✓ | Image-to-video — returns `polling_url` |
| GET | `/generate/video/status` | ✓ | Poll video generation status via `polling_url` |
| GET | `/generate/images` | ✓ | List current user's generated images |
| GET | `/generate/images/{id}` | ✓ | Get a single generated image |
| GET | `/generate/videos` | ✓ | List current user's video jobs |
| GET | `/generate/videos/{id}` | ✓ | Get a single video job |
| POST | `/generate/videos/{id}/cancel` | ✓ | Cancel a queued/processing video job |
**Video generation flow:** The `/generate/video` and `/generate/video/from-image` endpoints submit a job to OpenRouter's `/api/v1/videos` endpoint and return immediately with `status: "queued"` and a `polling_url`. Clients poll `/generate/video/status?polling_url=...` every 5 seconds until `status` is `"completed"` (returns `unsigned_urls`) or `"failed"`.
**Video generation flow:** The `/generate/video` and `/generate/video/from-image` endpoints queue a job in the local database and return immediately with `status: "queued"`. A background worker (`video_worker.py`) submits the job to OpenRouter's `/api/v1/videos` endpoint, receives a `polling_url`, and polls it periodically until the job reaches `"completed"` or `"failed"`. The frontend polls `GET /generate/video/{id}/status` every 5 seconds to show live status updates.
**Image generation routing:** The router auto-detects the model type — models containing `"flux"` or `"gpt-5-image-mini"` are routed to `/chat/completions` with `modalities: ["image"]`, while others (e.g. DALL-E 3) use the legacy `/images/generations` endpoint.
**Image generation routing:** The router auto-detects the model type — models containing `"flux"` or `"gpt-5-image-mini"` are routed to `/chat/completions` with `modalities: ["image"]` (or `["image", "text"]` depending on cached output modalities), while others (e.g. DALL-E 3) use the legacy `/images/generations` endpoint.
### White Box DB Service (`db.py`)
+20 -9
@@ -48,20 +48,31 @@ Describes concrete behavior and interactions of the system's building blocks in
1. User submits video generation form with prompt, model, aspect ratio, resolution, and duration
2. Flask POSTs to `POST /generate/video` with JWT header
3. Auth Service validates JWT
4. Backend calls OpenRouter `POST /api/v1/videos` with model, prompt, aspect_ratio, resolution, duration_seconds
5. OpenRouter returns `{"id": "...", "polling_url": "..."}` with `status: "queued"`
6. FastAPI returns `VideoResponse` with `polling_url` to Flask
7. Flask renders result page with polling UI
8. Frontend JavaScript polls `GET /generate/video/status?polling_url=...` every 5 seconds
9. When `status` becomes `"completed"`, the response includes `unsigned_urls` — the video is displayed in a `<video>` element
10. If `status` becomes `"failed"`, an error message is shown
4. Backend inserts a row into `generated_videos` with `status: "queued"` and returns the DB job ID
5. Flask renders result page with polling UI
6. Background worker (`video_worker.py`) picks up queued jobs every 15 seconds:
- Calls OpenRouter `POST /api/v1/videos` with model, prompt, and parameters
- Receives `{"id": "...", "polling_url": "..."}` and updates the DB row to `status: "processing"`
- Polls the `polling_url` every 15 seconds until `status` is `"completed"` or `"failed"`
- Updates the DB row with the final status and video URL
7. Frontend JavaScript polls `GET /generate/video/{db_id}/status` every 5 seconds
8. When `status` becomes `"completed"`, the response includes `video_url` — the video is displayed in a `<video>` element
9. If `status` becomes `"failed"`, an error message is shown
10. User can click "Cancel Job" to mark the job as `"cancelled"` (stops local polling, does not stop the provider job)
## Scenario 4a: Video Generation (Image-to-Video)
1. User provides an image URL, motion prompt, model, aspect ratio, resolution, and duration
2. Flask POSTs to `POST /generate/video/from-image` with JWT header
3. Backend calls OpenRouter `POST /api/v1/videos` with `image_url`, prompt, and parameters
4. Same polling flow as Scenario 4
3. Same background worker flow as Scenario 4, with `generation_type: "image_to_video"`
## Scenario 4b: Video Job Cancellation
1. User clicks "Cancel Job" on the video detail page or gallery pending card
2. Frontend POSTs to `/generate/video/{id}/cancel`
3. Backend verifies the job belongs to the user and is not in a terminal state
4. Backend updates the DB row `status` to `"cancelled"`
5. Frontend stops polling and updates the UI to show "Job cancelled"
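The terminal-state check in step 3 can be sketched as a small guard (illustrative only; names here are assumptions, not the actual backend code):

```python
# Jobs in a terminal state can no longer be cancelled.
TERMINAL_STATES = {"completed", "failed", "cancelled"}


def can_cancel(status: str) -> bool:
    """A job may only be cancelled while it is still queued or processing."""
    return status not in TERMINAL_STATES
```

Jobs in `queued` or `processing` pass the guard; anything already terminal is rejected before the DB update in step 4.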
## Scenario 5: Token Refresh
+54 -46
@@ -5,34 +5,45 @@ Describes:
1. Technical infrastructure used to execute your system, with infrastructure elements like geographical locations, environments, computers, processors, channels and net topologies.
2. Mapping of (software) building blocks to that infrastructure elements.
**See**: [Coolify Deployment Guide](./deployment/coolify.md) for detailed instructions.
## Infrastructure Level 1
```text
┌────────────────────────────────────────────┐
│ Host / VM │
│ ┌─────────────┐ ┌────────────────────┐ │
│ │ frontend │ │ backend │ │
│ │ (Flask) │ │ (FastAPI) │ │
│ │ :12016 │ │ :12015 │ │
│ └──────┬──────┘ └─────────┬──────────┘ │
│ │ │ │
│ └────────┬──────────┘ │
│ │ │
│ ┌───────▼────────┐ │
│ │ db (DuckDB) │ │
│ │ data/app.db │ │
│ └────────────────┘ │
└────────────────────────────────────────────┘
```
Hosted on a single VM running Docker containers, deployed via Coolify with Nixpacks to 192.168.88.18 for production.
Containers run behind nginx at 192.168.88.11, which handles TLS termination and reverse proxying to the frontend on port 12016 and the backend on port 12015. The database is a file on the host filesystem at `data/app.db`, accessed by the backend service.
```mermaid
graph TD
Users[Users / Internet]
Nginx[nginx reverse proxy\nTLS termination]
Users -->|HTTPS| Nginx
subgraph Coolify Server
direction TB
subgraph AI Frontend
AI_Frontend[AI Frontend\nFlask\nServes HTML/CSS/JS UI]
end
subgraph AI Backend
AI_Backend[AI Backend\nFastAPI\nCommunicates with openrouter.ai API]
db[(DuckDB Database\nFile: data/app.db)]
AI_Backend --> db
end
AI_Frontend -->|BACKEND_URL:12015| AI_Backend
end
Nginx -->|12016| AI_Frontend
```
**Motivation:** All three components run on a single VM (or as Docker containers) for simplicity and low operational overhead.
**Motivation:** All three components run as Docker containers for simplicity and low operational overhead.
**Quality and/or Performance Features:** The frontend and backend are stateless; DuckDB persists data on the host filesystem.
**Mapping of Building Blocks to Infrastructure:**
| Building Block | Container / Process | Port |
| --------------- | ---------------------------- | ----- |
| --------------- | ---------------------------- | --------------- |
| Nginx | `nginx` | 80/443 (public) |
| Coolify Server | `coolify` | — |
| Flask frontend | `frontend` | 12016 |
| FastAPI backend | `backend` | 12015 |
| DuckDB | File on host (`data/app.db`) | — |
@@ -41,35 +52,32 @@ Describes:
### Coolify with Nixpacks (Production)
Both services are deployed as separate Nixpacks resources in Coolify:
Both services are deployed as separate Nixpacks resources in Coolify, which results in two separate containers running on the same host. The database is a file on the host filesystem, mounted as a volume in the backend container.
```text
┌──────────────────────────────────────────────────────────┐
│ Coolify Server │
│ ┌────────────────────────────┐ │
│ │ Backend Service (FastAPI) │ │
│ │ - Base Dir: /backend │ │
│ │ - Port: 12015 │ │
│ │ - Volume: /app/data │ │
│ ├────────────────────────────┤ │
│ │ Frontend Service (Flask) │ │
│ │ - Base Dir: /frontend │ │
│ │ - Port: 12016 (public) │ │
│ │ - BACKEND_URL: :12015 │ │
│ └────────────────────────────┘ │
│ ▲ │
│ Coolify reverse proxy (TLS termination) │
└──────────────────────────────────────────────────────────┘
Users / Internet
```
#### Frontend
```mermaid
graph TD
subgraph Coolify Server
direction TB
subgraph AI Frontend
AI_Frontend[AI Frontend\nNixpacks\nBase Dir: /frontend]
end
end
Users[Users / Internet] -->|HTTPS| AI_Frontend
```
**Deployment Steps:**
#### Backend
1. Create backend Nixpacks service in Coolify with Base Directory `/backend`
2. Create frontend Nixpacks service with Base Directory `/frontend`
3. Set environment variables per service
4. Attach domain to frontend on port `12016`
5. Enable Auto HTTPS in Coolify
**See**: [Coolify Deployment Guide](./deployment/coolify.md) for detailed instructions.
```mermaid
graph TD
subgraph Coolify Server
direction TB
subgraph AI Backend
AI_Backend[AI Backend\nNixpacks\nBase Dir: /backend]
db[(DuckDB Database\nVolume: /app/data)]
AI_Backend --> db
end
end
Frontend[Frontend Container] -->|BACKEND_URL:12015| AI_Backend
```
+8 -69
@@ -4,6 +4,14 @@ Describes cross-cutting concepts (practices, patterns, regulations or solution id
> Pick **only** the most-needed topics for your system.
## OpenRouter API Integration
see [docs/8.1-openrouter.md](./8.1-openrouter.md) for details on how the backend integrates with OpenRouter for multi-modal AI generation, including image and video generation flows.
## DuckDB Concurrency and Storage
See [docs/8.2-duckdb.md](./8.2-duckdb.md) for details on how the backend handles concurrent access to DuckDB and manages the database file on the host filesystem.
## Security
- All API endpoints (except `/auth/login`) require a valid JWT in the `Authorization: Bearer` header.
@@ -25,72 +33,3 @@ Describes cross-cutting concepts (practices, patterns, regulations or solution id
- All secrets (API keys, DB path, JWT secret) loaded from environment variables or `.env` file.
- No secrets committed to source control.
## DuckDB Concurrency and Storage
### Single Writer Per Process
DuckDB allows only one process to open the database file in read-write mode at a time. The FastAPI backend must be run with a single worker (`uvicorn --workers 1`). Running multiple workers against the same DuckDB file will cause startup errors.
### asyncio.Lock for Writes
All database write operations (`INSERT`, `UPDATE`, `DELETE`) in the FastAPI async context are wrapped in a single `asyncio.Lock` (`get_write_lock()` from `backend/app/db.py`). This prevents concurrent coroutines from issuing overlapping writes within the single process, which would otherwise raise DuckDB optimistic concurrency errors.
Read operations (`SELECT`) do not require the lock — DuckDB's MVCC provides consistent read snapshots.
### Schema
```sql
CREATE TABLE users (
id UUID DEFAULT uuid() PRIMARY KEY,
email VARCHAR NOT NULL UNIQUE,
password_hash VARCHAR NOT NULL,
role VARCHAR DEFAULT 'user',
created_at TIMESTAMP DEFAULT now(),
updated_at TIMESTAMP DEFAULT now()
);
CREATE TABLE refresh_tokens (
jti UUID DEFAULT uuid() PRIMARY KEY,
user_id UUID NOT NULL, -- soft FK to users.id
issued_at TIMESTAMP DEFAULT now(),
expires_at TIMESTAMP NOT NULL,
revoked BOOLEAN DEFAULT false
);
```
> The `REFERENCES users(id)` foreign key is intentionally omitted from `refresh_tokens`. DuckDB fires FK checks on `UPDATE` of the parent table (including email changes), causing false constraint violations. Referential integrity is enforced manually: deleting a user also deletes their refresh tokens in the same write transaction.
### Access Tokens
Access tokens are **stateless** JWTs — not stored in the database. They are validated by signature and expiry claim only. The short TTL (15 minutes) limits the blast radius if a token is leaked.
### Refresh Tokens
Refresh tokens store a JTI (JWT ID) UUID in the `refresh_tokens` table. On each use the old JTI is revoked and a new one issued (rotation). On logout the JTI is immediately revoked. Expired and revoked tokens can be purged via `POST /admin/tokens/purge`.
### Future: AI Generation History
AI generation metadata (model, prompt, cost, result URLs) can be stored as JSON columns in a future `generation_history` table in DuckDB, enabling per-user analytics and usage dashboards at zero extra infrastructure cost.
## OpenRouter API Integration
### Image Generation
Image generation uses two different OpenRouter endpoints depending on the model:
- **Legacy endpoint** (`/images/generations`): Used by DALL-E 3 and similar models. Returns `data[].url` and `data[].b64_json`.
- **Chat completions** (`/chat/completions` with `modalities: ["image"]`): Used by FLUX.2 Klein 4B and GPT-5 Image Mini. Returns `choices[0].message.images[].image_url.url` as base64 data URLs.
The router auto-detects the model type and routes accordingly. Image configuration (`aspect_ratio`, `image_size`) is passed via `image_config` for chat-based models.
### Video Generation
Video generation uses OpenRouter's `/api/v1/videos` endpoint with a **submit-and-poll** pattern:
1. `POST /api/v1/videos` with `model`, `prompt`, `aspect_ratio`, `resolution`, `duration_seconds`
2. Response: `{"id": "job_id", "polling_url": "https://..."}` with `status: "queued"`
3. Poll `GET polling_url` every 5 seconds until `status` is `"completed"` or `"failed"`
4. Completed response includes `unsigned_urls: [str]` array with video download URLs
Supported models: `openai/sora-2-pro`, `google/veo-3.1-fast`. Both text-to-video and image-to-video use the same `/api/v1/videos` endpoint (image-to-video includes `image_url` in the request body).
+31
@@ -0,0 +1,31 @@
# OpenRouter API Integration
## Text Generation
> [!warning]
> TODO: Add more details on how the backend integrates with OpenRouter for text generation, including chat completions and single-prompt generation flows.
## Image Generation
Image generation uses two different OpenRouter endpoints depending on the model:
- **Legacy endpoint** (`/images/generations`): Used by DALL-E 3 and similar models. Returns `data[].url` and `data[].b64_json`.
- **Chat completions** (`/chat/completions` with `modalities: ["image"]`): Used by FLUX.2 Klein 4B and GPT-5 Image Mini. Returns `choices[0].message.images[].image_url.url` as base64 data URLs.
The router auto-detects the model type and routes accordingly. Image configuration (`aspect_ratio`, `image_size`) is passed via `image_config` for chat-based models.
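The auto-detection could be sketched roughly like this (an illustrative helper, not the router's actual code):

```python
def uses_chat_image_endpoint(model: str) -> bool:
    """Sketch of the model routing: FLUX and GPT-5 Image Mini go to
    /chat/completions with modalities=["image"]; everything else
    (e.g. DALL-E 3) uses the legacy /images/generations endpoint."""
    name = model.lower()
    return "flux" in name or "gpt-5-image-mini" in name
```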
## Video Generation
Video generation uses OpenRouter's `/api/v1/videos` endpoint with a **submit-and-poll** pattern orchestrated by a background worker:
1. User submits a video request via `POST /generate/video` (or `/generate/video/from-image`)
2. Backend inserts a row into `generated_videos` with `status: "queued"` and returns immediately
3. Background worker (`video_worker.py`) picks up queued jobs every 15 seconds:
- Calls `POST /api/v1/videos` with `model`, `prompt`, `aspect_ratio`, `resolution`, `duration`
- Receives `{"id": "job_id", "polling_url": "https://..."}` and updates DB to `status: "processing"`
- Polls `GET polling_url` every 15 seconds until `status` is `"completed"` or `"failed"`
- Updates DB with final status, `video_url`, and any `error` message
4. Frontend polls `GET /generate/video/{db_id}/status` every 5 seconds to show live updates
5. Completed response includes `video_url` — the video is displayed in a `<video>` element
Supported models: `openai/sora-2-pro`, `google/veo-3.1-fast`. Both text-to-video and image-to-video use the same `/api/v1/videos` endpoint (image-to-video includes `frame_images` with `first_frame` in the request body).
+46
@@ -0,0 +1,46 @@
# DuckDB Concurrency and Storage
## Single Writer Per Process
DuckDB allows only one process to open the database file in read-write mode at a time. The FastAPI backend must be run with a single worker (`uvicorn --workers 1`). Running multiple workers against the same DuckDB file will cause startup errors.
## asyncio.Lock for Writes
All database write operations (`INSERT`, `UPDATE`, `DELETE`) in the FastAPI async context are wrapped in a single `asyncio.Lock` (`get_write_lock()` from `backend/app/db.py`). This prevents concurrent coroutines from issuing overlapping writes within the single process, which would otherwise raise DuckDB optimistic concurrency errors.
Read operations (`SELECT`) do not require the lock — DuckDB's MVCC provides consistent read snapshots.
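The write path can be pictured with a stand-alone sketch (a plain list stands in for the DuckDB connection; `write_lock` mirrors `get_write_lock()`):

```python
import asyncio

write_lock = asyncio.Lock()  # stand-in for db.get_write_lock()


async def insert_row(db: list, row: dict) -> None:
    # Every write awaits the single process-wide lock, so concurrent
    # coroutines never issue overlapping writes.
    async with write_lock:
        db.append(row)


async def main() -> int:
    db: list = []
    # 50 concurrent "writes", all serialized through the one lock
    await asyncio.gather(*(insert_row(db, {"id": i}) for i in range(50)))
    return len(db)
```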
## Schema
```sql
CREATE TABLE users (
id UUID DEFAULT uuid() PRIMARY KEY,
email VARCHAR NOT NULL UNIQUE,
password_hash VARCHAR NOT NULL,
role VARCHAR DEFAULT 'user',
created_at TIMESTAMP DEFAULT now(),
updated_at TIMESTAMP DEFAULT now()
);
CREATE TABLE refresh_tokens (
jti UUID DEFAULT uuid() PRIMARY KEY,
user_id UUID NOT NULL, -- soft FK to users.id
issued_at TIMESTAMP DEFAULT now(),
expires_at TIMESTAMP NOT NULL,
revoked BOOLEAN DEFAULT false
);
```
> The `REFERENCES users(id)` foreign key is intentionally omitted from `refresh_tokens`. DuckDB fires FK checks on `UPDATE` of the parent table (including email changes), causing false constraint violations. Referential integrity is enforced manually: deleting a user also deletes their refresh tokens in the same write transaction.
## Access Tokens
Access tokens are **stateless** JWTs — not stored in the database. They are validated by signature and expiry claim only. The short TTL (15 minutes) limits the blast radius if a token is leaked.
## Refresh Tokens
Refresh tokens store a JTI (JWT ID) UUID in the `refresh_tokens` table. On each use the old JTI is revoked and a new one issued (rotation). On logout the JTI is immediately revoked. Expired and revoked tokens can be purged via `POST /admin/tokens/purge`.
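Rotation can be sketched with an in-memory store (illustrative only; the real implementation persists JTIs in the `refresh_tokens` table):

```python
import uuid


def rotate_refresh_token(store: dict, old_jti: str) -> str:
    """Revoke the presented JTI and issue a fresh one (rotation)."""
    entry = store.get(old_jti)
    if entry is None or entry["revoked"]:
        raise ValueError("unknown or already-revoked refresh token")
    entry["revoked"] = True          # the old token can never be reused
    new_jti = str(uuid.uuid4())
    store[new_jti] = {"revoked": False}
    return new_jti
```

A replayed (already-rotated) token fails the lookup, which is what makes rotation an effective theft detector.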
## Future: AI Generation History
AI generation metadata (model, prompt, cost, result URLs) can be stored as JSON columns in a future `generation_history` table in DuckDB, enabling per-user analytics and usage dashboards at zero extra infrastructure cost.
+2 -187
@@ -29,7 +29,7 @@ Coolify's built-in reverse proxy routes traffic:
3. Select the `ai.allucanget.biz` repository
4. Choose the `main` branch
5. Set **Build Pack** to `nixpacks`
6. **CRITICAL: Set Base Directory to `/backend`** — this tells Nixpacks to look in the `backend/` subdirectory for `requirements.txt` and the Python application
6. Set **Base Directory** to `/backend` - this tells Nixpacks to look in the `backend/` subdirectory for `requirements.txt` and the Python application
7. Set **Ports Exposed** to `12015`
8. Set **Start Command** to:
@@ -59,7 +59,7 @@ Add these as **Runtime** environment variables in Coolify:
2. Select the same repository
3. Choose the `main` branch
4. Set **Build Pack** to `nixpacks`
5. **CRITICAL: Set Base Directory to `/frontend`** — this tells Nixpacks to look in the `frontend/` subdirectory for `requirements.txt` and the Python application
5. Set **Base Directory** to `/frontend` - this tells Nixpacks to look in the `frontend/` subdirectory for `requirements.txt` and the Python application
6. Set **Ports Exposed** to `12016`
7. Set **Start Command** to:
@@ -173,188 +173,3 @@ All required environment variables:
- [ ] Domain names configured
- [ ] Health checks passing
- [ ] Logs reviewed for errors
1. In Coolify, click **Add Resource** → **Deploy a new resource** → **Git**
2. Connect your Git repository (`git.allucanget.biz`)
3. Select the `ai.allucanget.biz` repository
4. Choose the `main` branch
5. Set **Build Pack** to `nixpacks`
6. **CRITICAL: Set Base Directory to `/backend`** — this tells Nixpacks to look in the `backend/` subdirectory for `requirements.txt` and the Python application
7. Set **Ports Exposed** to `12015`
8. Set **Start Command** to:
```txt
uvicorn app.main:app --host 0.0.0.0 --port 12015
```
9. Click **Create Resource**
> **Important:** Nixpacks copies the **contents** of the Base Directory to `/app/` in the container. When Base Directory is `/backend`, the `backend/` folder wrapper is removed — only `app/`, `tests/`, and `requirements.txt` are copied. Therefore the start command uses `app.main:app` (not `backend.app.main:app`).
### Backend Environment Variables
Add these as **Runtime** environment variables in Coolify:
| Variable | Description | Example |
| -------------------- | ------------------------------------ | ------------------------------------ |
| `OPENROUTER_API_KEY` | OpenRouter API key for AI generation | `sk-or-v1-...` |
| `JWT_SECRET` | Secret key for JWT token signing | Generate with `openssl rand -hex 32` |
| `APP_URL` | Public URL of the backend | `https://api.ai.allucanget.biz` |
| `APP_NAME` | Application name | `All You Can GET AI` |
| `CORS_ORIGINS` | Comma-separated allowed origins | `https://ai.allucanget.biz` |
## Step 2: Create Frontend Service
1. In Coolify, click **Add Resource** → **Deploy a new resource** → **Git**
2. Select the same repository
3. Choose the `main` branch
4. Set **Build Pack** to `nixpacks`
5. **CRITICAL: Set Base Directory to `/frontend`** — this tells Nixpacks to look in the `frontend/` subdirectory for `requirements.txt` and the Python application
6. Set **Ports Exposed** to `12016`
7. Set **Start Command** to:
```txt
gunicorn app.main:app --bind 0.0.0.0:12016 --workers 2 --timeout 120
```
8. Click **Create Resource**
> **Note:** The frontend uses `requirements.txt` for production dependencies and `requirements-dev.txt` for development dependencies (like pytest). Nixpacks will automatically detect and install only the production dependencies.
> **Important:** Nixpacks copies the **contents** of the Base Directory to `/app/` in the container. When Base Directory is `/frontend`, the `frontend/` folder wrapper is removed — only `app/`, `tests/`, and `requirements.txt` are copied. Therefore the start command uses `app.main:app` (not `frontend.app.main:app`).
### Frontend Environment Variables
Add these as **Runtime** environment variables in Coolify:
| Variable | Description | Example |
| ------------------ | ----------------------------------------- | --------------------------------------------------------------- |
| `FLASK_SECRET_KEY` | Flask session cookie signing key | Generate with `openssl rand -hex 32` |
| `BACKEND_URL` | Internal URL to reach the backend service | `http://localhost:12015` (or use Coolify's internal networking) |
## Step 3: Configure Reverse Proxy
Coolify provides a built-in reverse proxy. Configure routing rules:
### Backend Proxy Rules
- **Domain**: `api.ai.allucanget.biz` (or subdomain of your choice)
- **Port**: `12015`
- **Path**: `/api/*` → forward to backend
### Frontend Proxy Rules
- **Domain**: `ai.allucanget.biz`
- **Port**: `12016`
- **Path**: `/` → forward to frontend
### Nginx Configuration (Optional)
If you need custom Nginx configuration, create `nginx/coolify.conf`:
```nginx
# Reverse proxy configuration for Coolify
# This file is for reference — Coolify's built-in proxy handles routing
# Backend API proxy
location /api/ {
proxy_pass http://backend:12015;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# Frontend proxy
location / {
proxy_pass http://frontend:12016;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
```
## Step 4: SSL/TLS
Enable HTTPS in Coolify for both services:
1. Go to each service's settings
2. Enable **Auto HTTPS** (Let's Encrypt)
3. Configure domain names
4. Coolify automatically handles certificate renewal
## Step 5: Persistent Storage (Optional)
If you want to persist DuckDB data:
1. In Coolify, go to the **Backend** service
2. Navigate to **Persistent Storage**
3. Add a volume mount:
- **Host Path**: `/data` (or any path on the host)
- **Container Path**: `/app/data`
- **Type**: `Bind Mount` or `Volume`
## Troubleshooting
### Docker Compose deployment fails in Coolify
- Verify Coolify uses `docker-compose.coolify.yml`, not local `docker-compose.yml`
- Verify public domain points to `frontend` service on port `12016`
- Do not add `nginx` to the Coolify stack — bind-mounting a local config file will fail since the file doesn't exist on the Coolify server
### Backend healthcheck stays unhealthy
- Check backend logs in Coolify
- Verify `OPENROUTER_API_KEY` and `JWT_SECRET` are set
- Verify volume mount at `/app/data` is writable
### Backend won't start
- Check that `OPENROUTER_API_KEY` is set
- Verify `JWT_SECRET` is a sufficiently long random string
- Check logs in Coolify's **Logs** tab
### Frontend can't reach backend
- Ensure `BACKEND_URL` points to the correct internal URL
- If both services are on the same Coolify server, use `http://localhost:12015`
- Check that the backend service is running and healthy
### CORS errors
- Set `CORS_ORIGINS` to include your frontend domain
- Example: `https://ai.allucanget.biz`
### Nixpacks build fails
- Verify the base directory is correct (`/backend` or `/frontend`)
- Check that `requirements.txt` exists in the base directory
- Review build logs in Coolify
## Environment Variable Summary
All required environment variables:
| Variable | Service | Required |
| -------------------- | -------- | ------------------------------------- |
| `OPENROUTER_API_KEY` | Backend | Yes |
| `JWT_SECRET` | Backend | Yes |
| `APP_URL` | Backend | Yes |
| `APP_NAME` | Backend | No (defaults to "All You Can GET AI") |
| `CORS_ORIGINS` | Backend | Yes |
| `FLASK_SECRET_KEY` | Frontend | Yes |
| `BACKEND_URL` | Frontend | Yes |
## Deployment Checklist
- [ ] Repository pushed to Git
- [ ] For Docker Compose: Coolify resource uses `docker-compose.coolify.yml`
- [ ] For Docker Compose: domain points to `frontend` service on port `12016`
- [ ] Backend service created with correct base directory (`/backend`)
- [ ] Backend environment variables configured
- [ ] Frontend service created with correct base directory (`/frontend`)
- [ ] Frontend environment variables configured
- [ ] SSL certificates enabled
- [ ] Domain names configured
- [ ] Health checks passing
- [ ] Logs reviewed for errors
+71 -4
@@ -217,11 +217,17 @@ def dashboard():
    images = img_resp.json() if img_resp.status_code == 200 else []
    gen_resp = _api("GET", "/generate/images", token=token)
    generated_images = gen_resp.json() if gen_resp.status_code == 200 else []
    vid_resp = _api("GET", "/generate/videos", token=token)
    generated_videos = vid_resp.json() if vid_resp.status_code == 200 else []
    videos = vid_resp.json() if vid_resp.status_code == 200 else []
    pending_videos = [v for v in videos if v.get(
        "status") not in ("completed", "failed")]
    completed_videos = [v for v in videos if v.get("status") == "completed"]
    return render_template("dashboard.html", user=user, images=images,
                           generated_images=generated_images,
                           generated_videos=generated_videos)
                           pending_videos=pending_videos,
                           completed_videos=completed_videos)


@app.get("/gallery")
@@ -405,7 +411,7 @@ def generate_image():
@app.route("/generate/video", methods=["GET", "POST"])
@login_required
def generate_video():
    result = error = None
    error = None
    token = session["access_token"]
    if request.method == "POST":
        mode = request.form.get("mode", "text")
@@ -413,6 +419,7 @@ def generate_video():
        duration = int(
            duration_raw) if duration_raw.strip().isdigit() else None
        resolution = request.form.get("resolution", "").strip() or None

        if mode == "image":
            resp = _api("POST", "/generate/video/from-image", token=token, json={
                "model": request.form.get("model", "").strip(),
@@ -430,12 +437,21 @@ def generate_video():
                "duration_seconds": duration,
                "resolution": resolution,
            })
        if resp.status_code == 200:
            result = resp.json()
            # On success, redirect to the detail page to monitor progress
            db_id = result.get("db_id")
            if db_id:
                return redirect(url_for("video_detail", video_id=db_id))
            # Fallback for older backend versions
            flash("Video job started.", "success")
            return redirect(url_for("gallery"))
        else:
            error = resp.json().get("detail", "Generation failed.")
    models = _load_models(token, "video")
    return render_template("generate_video.html", result=result, error=error, models=models)
    return render_template("generate_video.html", error=error, models=models)


@app.get("/generate/video/status")
@@ -453,6 +469,24 @@ def generate_video_status():
    return jsonify(resp.json()), resp.status_code


@app.get("/generate/video/<video_id>/status")
@login_required
def generate_video_db_status(video_id: str):
    """Return current DB status for a video job (polled by frontend JS)."""
    resp = _api(
        "GET", f"/generate/videos/{video_id}", token=session["access_token"])
    return jsonify(resp.json()), resp.status_code


@app.post("/generate/video/<video_id>/cancel")
@login_required
def cancel_video_job(video_id: str):
    """Proxy cancel request to backend."""
    resp = _api(
        "POST", f"/generate/videos/{video_id}/cancel", token=session["access_token"])
    return jsonify(resp.json()), resp.status_code


# ── Admin ─────────────────────────────────────────────────────────────────
@app.get("/admin")
@@ -491,6 +525,39 @@ def admin_models():
    return render_template("admin/models.html")


# ── Admin API proxies (same-origin for browser JS, avoids mixed-content) ──
@app.get("/api/admin/videos")
@admin_required
def api_admin_list_videos():
resp = _api("GET", "/admin/videos", token=session["access_token"])
return jsonify(resp.json()), resp.status_code
@app.post("/api/admin/videos/<job_id>/retry")
@admin_required
def api_admin_retry_video(job_id: str):
resp = _api(
"POST", f"/admin/videos/{job_id}/retry", token=session["access_token"])
return jsonify(resp.json()), resp.status_code
@app.post("/api/admin/videos/<job_id>/cancel")
@admin_required
def api_admin_cancel_video(job_id: str):
resp = _api(
"POST", f"/admin/videos/{job_id}/cancel", token=session["access_token"])
return jsonify(resp.json()), resp.status_code
@app.delete("/api/admin/videos/<job_id>")
@admin_required
def api_admin_delete_video(job_id: str):
resp = _api(
"DELETE", f"/admin/videos/{job_id}", token=session["access_token"])
return jsonify(resp.json()), resp.status_code
# ── Profile ───────────────────────────────────────────────────────────────
@app.route("/users/profile", methods=["GET", "POST"])
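The dashboard change in this file splits the `/generate/videos` response into pending and completed buckets; the partitioning logic can be sketched in isolation as follows (a sketch — field names follow the diff above):

```python
def partition_videos(videos):
    """Split video job dicts into (pending, completed) by status.

    Jobs whose status is neither "completed" nor "failed" count as
    pending, matching the dashboard view above; failed jobs appear
    in neither list.
    """
    pending = [v for v in videos if v.get("status") not in ("completed", "failed")]
    completed = [v for v in videos if v.get("status") == "completed"]
    return pending, completed


jobs = [
    {"id": "a", "status": "queued"},
    {"id": "b", "status": "completed"},
    {"id": "c", "status": "failed"},
]
pending, completed = partition_videos(jobs)
# pending holds job "a", completed holds job "b"
```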
+79 -11
@@ -63,15 +63,75 @@ document.addEventListener("DOMContentLoaded", () => {
// ── Video status polling ───────────────────────────────
const pollDiv = document.getElementById("video-poll-status");
if (pollDiv) {
const pollingUrl = pollDiv.dataset.pollingUrl;
const videoId = pollDiv.dataset.videoId;
const statusText = document.getElementById("poll-status-text");
const videoContainer = document.getElementById("poll-video-container");
const cancelBtn = document.getElementById("cancel-video-btn");
const cancelMsg = document.getElementById("cancel-msg");
const MAX_POLLS = 120; // ~10 minutes at 5s interval
let pollCount = 0;
let interval = null;
const interval = setInterval(async () => {
const stopPolling = () => {
if (interval) {
clearInterval(interval);
interval = null;
}
};
if (cancelBtn) {
cancelBtn.addEventListener("click", async () => {
cancelBtn.disabled = true;
cancelBtn.textContent = "Cancelling…";
try {
const resp = await fetch(
"/generate/video/status?polling_url=" +
encodeURIComponent(pollingUrl),
"/generate/video/" + encodeURIComponent(videoId) + "/cancel",
{ method: "POST" },
);
if (resp.ok) {
stopPolling();
cancelBtn.classList.add("hidden");
if (cancelMsg) {
cancelMsg.textContent = "Job cancelled.";
cancelMsg.classList.remove("hidden", "text-red-500");
cancelMsg.classList.add("text-gray-300");
}
if (statusText) {
statusText.innerHTML = "Status: <strong>cancelled</strong>";
}
} else {
const data = await resp.json().catch(() => ({}));
cancelBtn.disabled = false;
cancelBtn.textContent = "Cancel Job";
if (cancelMsg) {
cancelMsg.textContent = data.detail || "Cancel failed.";
cancelMsg.classList.remove("hidden");
cancelMsg.classList.add("text-red-500");
}
}
} catch (e) {
cancelBtn.disabled = false;
cancelBtn.textContent = "Cancel Job";
if (cancelMsg) {
cancelMsg.textContent = "Network error.";
cancelMsg.classList.remove("hidden");
cancelMsg.classList.add("text-red-500");
}
}
});
}
interval = setInterval(async () => {
try {
pollCount++;
if (pollCount > MAX_POLLS) {
stopPolling();
pollDiv.innerHTML =
'<div class="alert alert-warning">Polling timed out. Please refresh the page to check status.</div>';
return;
}
const resp = await fetch(
"/generate/video/" + encodeURIComponent(videoId) + "/status",
);
if (!resp.ok) return;
const data = await resp.json();
@@ -81,8 +141,9 @@ document.addEventListener("DOMContentLoaded", () => {
}
if (data.status === "completed") {
clearInterval(interval);
if (data.video_url && videoContainer) {
stopPolling();
if (data.video_url) {
if (videoContainer) {
const vid = document.createElement("video");
vid.src = data.video_url;
vid.controls = true;
@@ -90,17 +151,24 @@ document.addEventListener("DOMContentLoaded", () => {
videoContainer.appendChild(vid);
const msg = pollDiv.querySelector("p");
if (msg) msg.textContent = "Video ready!";
} else {
// video_detail page: reload to show the video element
window.location.reload();
}
}
} else if (data.status === "failed") {
clearInterval(interval);
stopPolling();
pollDiv.innerHTML =
'<div class="alert alert-error">Generation failed: ' +
(data.error || "Unknown error") +
"</div>";
'<div class="alert alert-error">Generation failed.</div>';
} else if (data.status === "cancelled") {
stopPolling();
if (cancelBtn) cancelBtn.classList.add("hidden");
pollDiv.innerHTML =
'<div class="alert alert-info">Job was cancelled.</div>';
}
} catch (e) {
console.error("Video polling error:", e);
}
}, 12016);
}, 5000);
}
});
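The polling loop above terminates on three job statuses or after `MAX_POLLS` ticks; the stop condition can be sketched as a pure function (a sketch of the pattern, not the shipped code):

```javascript
// Decide whether a poll loop should stop, and why.
// Terminal statuses and the 120-tick cap mirror the handler above.
function pollDecision(pollCount, status, maxPolls = 120) {
  if (pollCount > maxPolls) return { stop: true, reason: "timeout" };
  if (["completed", "failed", "cancelled"].includes(status)) {
    return { stop: true, reason: status };
  }
  return { stop: false, reason: null };
}
```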
+6 -2
@@ -139,11 +139,15 @@ nav {
/* ─── Main layout ──────────────────────────────────────── */
main {
max-width: 800px;
max-width: 1200px;
margin: 2rem auto;
padding: 0 1rem;
}
main:has(.admin-page) {
max-width: 1200px;
}
/* ─── Alerts ───────────────────────────────────────────── */
.alert {
padding: 0.75rem 1rem;
@@ -615,7 +619,7 @@ main {
/* Card */
.card {
background: #fff;
background: rgba(255, 255, 255, 0.08);
border-radius: 10px;
padding: 2rem;
box-shadow: 0 1px 4px rgba(0, 0, 0, 0.08);
+200 -1
@@ -1,6 +1,6 @@
{% extends "base.html" %} {% block title %}Admin — All You Can GET AI{% endblock
%} {% block content %}
<div class="card">
<div class="card admin-page">
<h1>Admin Dashboard</h1>
{% if stats %}
@@ -76,5 +76,204 @@
</tbody>
</table>
</div>
<!-- ── Video Jobs ──────────────────────────────────────────────── -->
<h2 class="section-title" style="margin-top: 2rem">Video Jobs</h2>
<div
style="
display: flex;
gap: 1rem;
align-items: center;
flex-wrap: wrap;
margin-bottom: 1rem;
"
>
<label for="vj-status-filter" style="font-weight: 600"
>Filter by status:</label
>
<select id="vj-status-filter" class="form-control" style="width: auto">
<option value="">All</option>
<option value="queued">Queued</option>
<option value="processing">Processing</option>
<option value="completed">Completed</option>
<option value="failed">Failed</option>
<option value="cancelled">Cancelled</option>
</select>
<label for="vj-sort" style="font-weight: 600">Sort:</label>
<select id="vj-sort" class="form-control" style="width: auto">
<option value="created_desc">Created (newest first)</option>
<option value="created_asc">Created (oldest first)</option>
<option value="updated_desc">Updated (newest first)</option>
<option value="status_asc">Status (A-Z)</option>
<option value="model_asc">Model (A-Z)</option>
</select>
<button id="vj-refresh" class="btn btn-sm">Refresh</button>
<span
id="vj-count"
style="color: var(--text-muted, #888); font-size: 0.9em"
></span>
</div>
<div class="table-wrap">
<table id="vj-table">
<thead>
<tr>
<th>User</th>
<th>Status</th>
<th>Model</th>
<th>Prompt</th>
<th>Created</th>
<th>Updated</th>
<th>Actions</th>
</tr>
</thead>
<tbody id="vj-tbody">
<tr>
<td colspan="7" class="text-muted">Loading…</td>
</tr>
</tbody>
</table>
</div>
</div>
<script>
(function () {
let allJobs = [];
async function loadJobs() {
document.getElementById("vj-tbody").innerHTML =
'<tr><td colspan="7" class="text-muted">Loading…</td></tr>';
try {
const r = await fetch("/api/admin/videos");
if (!r.ok) throw new Error(await r.text());
allJobs = await r.json();
renderJobs();
} catch (e) {
document.getElementById("vj-tbody").innerHTML =
`<tr><td colspan="7" style="color:red;">Error: ${e.message}</td></tr>`;
}
}
function renderJobs() {
const statusFilter = document.getElementById("vj-status-filter").value;
const sort = document.getElementById("vj-sort").value;
let jobs = statusFilter
? allJobs.filter((j) => j.status === statusFilter)
: [...allJobs];
jobs.sort((a, b) => {
if (sort === "created_asc")
return new Date(a.created_at) - new Date(b.created_at);
if (sort === "updated_desc")
return new Date(b.updated_at) - new Date(a.updated_at);
if (sort === "status_asc") return a.status.localeCompare(b.status);
if (sort === "model_asc") return a.model_id.localeCompare(b.model_id);
return new Date(b.created_at) - new Date(a.created_at); // created_desc default
});
document.getElementById("vj-count").textContent =
`${jobs.length} job${jobs.length !== 1 ? "s" : ""}`;
const tbody = document.getElementById("vj-tbody");
if (jobs.length === 0) {
tbody.innerHTML =
'<tr><td colspan="7" class="text-muted">No jobs found.</td></tr>';
return;
}
const statusColor = {
completed: "color:var(--success-color,#4caf50)",
failed: "color:var(--danger-color,#e53935)",
cancelled: "color:var(--danger-color,#e53935)",
processing: "color:var(--warning-color,#fb8c00)",
queued: "color:var(--warning-color,#fb8c00)",
};
tbody.innerHTML = jobs
.map((job) => {
const sc = statusColor[job.status] || "";
const canRetry =
job.status === "failed" || job.status === "cancelled";
const canCancel =
job.status === "queued" || job.status === "processing";
const actions = [
canRetry
? `<button class="btn btn-sm vj-retry" data-id="${job.id}">Retry</button>`
: "",
canCancel
? `<button class="btn btn-sm vj-cancel" data-id="${job.id}">Cancel</button>`
: "",
`<button class="btn btn-sm btn-danger vj-delete" data-id="${job.id}">Delete</button>`,
].join(" ");
const prompt =
job.prompt.length > 60 ? job.prompt.slice(0, 57) + "…" : job.prompt;
const created = job.created_at
? new Date(job.created_at).toLocaleString()
: "—";
const updated = job.updated_at
? new Date(job.updated_at).toLocaleString()
: "—";
return `<tr>
<td>${job.user_email || "—"}</td>
<td style="${sc};font-weight:600;">${job.status}</td>
<td style="font-size:.85em;">${job.model_id}</td>
<td title="${job.prompt.replace(/"/g, "&quot;")}">${prompt}</td>
<td style="white-space:nowrap;">${created}</td>
<td style="white-space:nowrap;">${updated}</td>
<td style="white-space:nowrap;">${actions}</td>
</tr>`;
})
.join("");
}
async function apiPost(path) {
const r = await fetch(path, { method: "POST" });
if (!r.ok) {
const d = await r.json().catch(() => ({}));
throw new Error(d.detail || r.statusText);
}
return r.json();
}
async function apiDelete(path) {
const r = await fetch(path, { method: "DELETE" });
if (!r.ok) {
const d = await r.json().catch(() => ({}));
throw new Error(d.detail || r.statusText);
}
return r.json();
}
document
.getElementById("vj-tbody")
.addEventListener("click", async function (e) {
const btn = e.target.closest("button");
if (!btn) return;
const id = btn.dataset.id;
try {
if (btn.classList.contains("vj-retry"))
await apiPost(`/api/admin/videos/${id}/retry`);
if (btn.classList.contains("vj-cancel"))
await apiPost(`/api/admin/videos/${id}/cancel`);
if (btn.classList.contains("vj-delete")) {
if (!confirm("Permanently delete this video job?")) return;
await apiDelete(`/api/admin/videos/${id}`);
}
await loadJobs();
} catch (err) {
alert("Error: " + err.message);
}
});
document
.getElementById("vj-status-filter")
.addEventListener("change", renderJobs);
document.getElementById("vj-sort").addEventListener("change", renderJobs);
document.getElementById("vj-refresh").addEventListener("click", loadJobs);
loadJobs();
})();
</script>
{% endblock %}
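The table's sort logic above can be factored into a comparator keyed on the select value; a standalone sketch of that pattern (job fields follow the diff above):

```javascript
// Build a sort comparator matching the admin table's options.
// "created_desc" is the fallthrough default, as in the diff.
function jobComparator(sort) {
  return (a, b) => {
    if (sort === "created_asc")
      return new Date(a.created_at) - new Date(b.created_at);
    if (sort === "updated_desc")
      return new Date(b.updated_at) - new Date(a.updated_at);
    if (sort === "status_asc") return a.status.localeCompare(b.status);
    if (sort === "model_asc") return a.model_id.localeCompare(b.model_id);
    return new Date(b.created_at) - new Date(a.created_at); // created_desc
  };
}
```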
+182
@@ -0,0 +1,182 @@
{% extends "base.html" %} {% block title %}Admin - Video Jobs{% endblock %} {%
block content %}
<div class="container mx-auto px-4 py-8">
<h1 class="text-3xl font-bold mb-6">Admin: Video Jobs</h1>
<!-- Purge Old Jobs -->
<div class="bg-gray-800 p-4 rounded-lg shadow-md mb-6">
<h2 class="text-xl font-semibold mb-2">Maintenance</h2>
<p class="text-gray-400 mb-4">
Delete all completed, failed, or cancelled jobs older than 30 days.
</p>
<button
id="purge-button"
class="bg-red-500 hover:bg-red-700 text-white font-bold py-2 px-4 rounded"
>
Purge Old Jobs
</button>
<p id="purge-status" class="mt-2 text-sm"></p>
</div>
<!-- Video Jobs Table -->
<div class="bg-gray-800 p-4 rounded-lg shadow-md overflow-x-auto">
<table class="min-w-full divide-y divide-gray-700">
<thead class="bg-gray-700">
<tr>
<th
scope="col"
class="px-4 py-3 text-left text-xs font-medium text-gray-300 uppercase tracking-wider"
>
User
</th>
<th
scope="col"
class="px-4 py-3 text-left text-xs font-medium text-gray-300 uppercase tracking-wider"
>
Status
</th>
<th
scope="col"
class="px-4 py-3 text-left text-xs font-medium text-gray-300 uppercase tracking-wider"
>
Model
</th>
<th
scope="col"
class="px-4 py-3 text-left text-xs font-medium text-gray-300 uppercase tracking-wider"
>
Prompt
</th>
<th
scope="col"
class="px-4 py-3 text-left text-xs font-medium text-gray-300 uppercase tracking-wider"
>
Created
</th>
<th
scope="col"
class="px-4 py-3 text-left text-xs font-medium text-gray-300 uppercase tracking-wider"
>
Actions
</th>
</tr>
</thead>
<tbody id="jobs-table-body" class="bg-gray-800 divide-y divide-gray-700">
<tr>
<td colspan="6" class="text-center py-4">Loading jobs...</td>
</tr>
</tbody>
</table>
</div>
</div>
<script>
document.addEventListener("DOMContentLoaded", function () {
const jobsTableBody = document.getElementById("jobs-table-body");
const purgeButton = document.getElementById("purge-button");
const purgeStatus = document.getElementById("purge-status");
async function fetchJobs() {
try {
const response = await fetch(
"{{ config['BACKEND_URL'] }}/admin/videos",
{
headers: {
Authorization: "Bearer {{ session['access_token'] }}",
},
},
);
if (!response.ok) throw new Error("Failed to fetch jobs");
const jobs = await response.json();
jobsTableBody.innerHTML = "";
if (jobs.length === 0) {
jobsTableBody.innerHTML =
'<tr><td colspan="6" class="text-center py-4">No video jobs found.</td></tr>';
} else {
jobs.forEach((job) => {
const statusClass =
job.status === "completed"
? "text-green-400"
: job.status === "failed" || job.status === "cancelled"
? "text-red-400"
: "text-yellow-400";
const cancelBtn =
job.status === "queued" || job.status === "processing"
? `<button class="cancel-btn text-red-400 hover:text-red-600 text-sm" data-job-id="${job.id}">Cancel</button>`
: "";
const row = `
<tr>
<td class="px-4 py-3 whitespace-nowrap text-sm">${job.user_email || "Unknown"}</td>
<td class="px-4 py-3 whitespace-nowrap text-sm font-semibold ${statusClass}">${job.status}</td>
<td class="px-4 py-3 whitespace-nowrap text-sm">${job.model_id}</td>
<td class="px-4 py-3 text-sm truncate max-w-xs">${job.prompt}</td>
<td class="px-4 py-3 whitespace-nowrap text-sm">${new Date(job.created_at).toLocaleString()}</td>
<td class="px-4 py-3 whitespace-nowrap text-sm">${cancelBtn}</td>
</tr>
`;
jobsTableBody.innerHTML += row;
});
}
} catch (error) {
jobsTableBody.innerHTML =
'<tr><td colspan="6" class="text-center py-4 text-red-500">Error loading jobs.</td></tr>';
console.error("Error fetching jobs:", error);
}
}
async function purgeJobs() {
purgeButton.disabled = true;
purgeStatus.textContent = "Purging...";
purgeStatus.classList.remove("text-red-500", "text-green-500");
try {
const response = await fetch(
"{{ config['BACKEND_URL'] }}/admin/videos/purge",
{
method: "POST",
headers: {
Authorization: "Bearer {{ session['access_token'] }}",
},
},
);
const data = await response.json();
if (!response.ok)
throw new Error(data.detail || "Failed to purge jobs");
purgeStatus.textContent = `Purged ${data.deleted} jobs. ${data.remaining} remaining.`;
purgeStatus.classList.add("text-green-500");
fetchJobs();
} catch (error) {
purgeStatus.textContent = `Error: ${error.message}`;
purgeStatus.classList.add("text-red-500");
} finally {
purgeButton.disabled = false;
}
}
// Cancel button event delegation
jobsTableBody.addEventListener("click", async function (e) {
if (e.target.classList.contains("cancel-btn")) {
const jobId = e.target.dataset.jobId;
try {
const response = await fetch(
`{{ config['BACKEND_URL'] }}/admin/videos/${jobId}/cancel`,
{
method: "POST",
headers: {
Authorization: "Bearer {{ session['access_token'] }}",
},
},
);
if (!response.ok) throw new Error("Failed to cancel job");
fetchJobs();
} catch (error) {
alert(`Error: ${error.message}`);
}
}
});
purgeButton.addEventListener("click", purgeJobs);
fetchJobs();
});
</script>
{% endblock %}
+1
@@ -8,6 +8,7 @@
rel="stylesheet"
href="{{ url_for('static', filename='style.css') }}"
/>
<script src="https://cdn.tailwindcss.com"></script>
</head>
<body>
<header>
+45 -9
@@ -6,12 +6,42 @@ endblock %} {% block content %}
<a href="{{ url_for('generate') }}" class="btn">Start generating</a>
</div>
{% if generated_images %}
{% if pending_videos %}
<div class="card mt-2">
<h2>Pending Video Jobs</h2>
<div class="image-grid">
{% for vid in pending_videos %}
<a
href="{{ url_for('video_detail', video_id=vid.id) }}"
class="image-grid-item"
>
<div
style="
background: #1a1a1a;
border-radius: 6px;
padding: 2rem;
text-align: center;
"
>
<span class="text-muted">{{ vid.status | capitalize }} &hellip;</span>
</div>
<p class="text-muted" style="font-size: 0.75rem; margin-top: 0.25rem">
<strong>{{ vid.model_id }}</strong><br />{{ vid.prompt[:80] }}{% if
vid.prompt|length > 80 %}…{% endif %}
</p>
</a>
{% endfor %}
</div>
</div>
{% endif %} {% if generated_images %}
<div class="card mt-2">
<h2>Generated images</h2>
<div class="image-grid">
{% for img in generated_images %}
<div class="image-grid-item">
<a
href="{{ url_for('image_detail', image_id=img.id) }}"
class="image-grid-item"
>
<img
src="{{ img.image_data }}"
alt="{{ img.prompt }}"
@@ -22,16 +52,19 @@ endblock %} {% block content %}
<strong>{{ img.model_id }}</strong><br />{{ img.prompt[:80] }}{% if
img.prompt|length > 80 %}…{% endif %}
</p>
</div>
</a>
{% endfor %}
</div>
</div>
{% endif %} {% if generated_videos %}
{% endif %} {% if completed_videos %}
<div class="card mt-2">
<h2>Generated videos</h2>
<div class="image-grid">
{% for vid in generated_videos %}
<div class="image-grid-item">
{% for vid in completed_videos %}
<a
href="{{ url_for('video_detail', video_id=vid.id) }}"
class="image-grid-item"
>
{% if vid.video_url %}
<video controls style="max-width: 100%; border-radius: 6px">
<source src="{{ vid.video_url }}" />
@@ -54,7 +87,7 @@ endblock %} {% block content %}
vid.prompt|length > 80 %}…{% endif %}<br />
<em>{{ vid.status }}</em>
</p>
</div>
</a>
{% endfor %}
</div>
</div>
@@ -63,7 +96,10 @@ endblock %} {% block content %}
<h2>Uploaded reference images</h2>
<div class="image-grid">
{% for img in images %}
<div class="image-grid-item">
<a
href="{{ url_for('upload_detail', image_id=img.id) }}"
class="image-grid-item"
>
<img
src="{{ url_for('serve_uploaded_image', image_id=img.id) }}"
alt="{{ img.filename }}"
@@ -73,7 +109,7 @@ endblock %} {% block content %}
<p class="text-muted" style="font-size: 0.75rem; margin-top: 0.25rem">
{{ img.filename }} &mdash; {{ (img.size_bytes / 1024) | round(1) }} KB
</p>
</div>
</a>
{% endfor %}
</div>
</div>
+147 -6
@@ -1,5 +1,10 @@
{% extends "base.html" %} {% block title %}My Gallery{% endblock %} {% block
content %}
<div
class="container mx-auto px-4 py-8"
data-current-page="1"
data-per-page="12"
>
<div class="container mx-auto px-4 py-8">
<h1 class="text-3xl font-bold mb-6">My Gallery</h1>
@@ -13,10 +18,11 @@ content %}
class="grid grid-cols-1 md:grid-cols-2 lg:grid-cols-3 xl:grid-cols-4 gap-6"
>
{% for video in pending_videos %}
<a
href="{{ url_for('video_detail', video_id=video.id) }}"
class="block bg-gray-800 rounded-lg shadow-lg overflow-hidden hover:shadow-2xl transition-shadow duration-300"
<div
class="block bg-gray-800 rounded-lg shadow-lg overflow-hidden hover:shadow-2xl transition-shadow duration-300 relative"
data-pending-video-id="{{ video.id }}"
>
<a href="{{ url_for('video_detail', video_id=video.id) }}">
<div class="p-4">
<p class="font-bold text-lg truncate">{{ video.prompt }}</p>
<p class="text-sm text-gray-400">
@@ -30,6 +36,16 @@ content %}
</p>
</div>
</a>
<div class="px-4 pb-4">
<button
class="cancel-pending-btn px-3 py-1 bg-red-600 hover:bg-red-700 text-white rounded text-xs"
data-video-id="{{ video.id }}"
>
Cancel
</button>
<span class="cancel-pending-msg text-xs ml-2 hidden"></span>
</div>
</div>
{% endfor %}
</div>
</div>
@@ -55,7 +71,13 @@ content %}
class="w-full h-48 object-cover"
/>
<div class="p-4">
<p class="text-sm truncate">{{ image.prompt }}</p>
<p class="font-bold text-sm truncate">{{ image.prompt }}</p>
<p class="text-xs text-gray-400 mt-1">
Image ID: {{ image.id[:8] }}...
</p>
<p class="text-xs text-gray-500 mt-1">
{{ image.created_at | fromisoformat | humantime }}
</p>
</div>
</a>
{% endfor %}
@@ -86,6 +108,13 @@ content %}
href="{{ url_for('video_detail', video_id=video.id) }}"
class="block bg-gray-800 rounded-lg shadow-lg overflow-hidden hover:shadow-2xl transition-shadow duration-300"
>
{% if video.video_url %}
<img
src="{{ video.video_url }}#t=0.1"
alt="{{ video.prompt }}"
class="w-full h-48 object-cover"
/>
{% else %}
<div class="w-full h-48 bg-black flex items-center justify-center">
<svg
class="w-12 h-12 text-gray-500"
@@ -108,8 +137,15 @@ content %}
></path>
</svg>
</div>
{% endif %}
<div class="p-4">
<p class="text-sm truncate">{{ video.prompt }}</p>
<p class="font-bold text-sm truncate">{{ video.prompt }}</p>
<p class="text-xs text-gray-400 mt-1">
Video ID: {{ video.id[:8] }}...
</p>
<p class="text-xs text-gray-500 mt-1">
{{ video.created_at | fromisoformat | humantime }}
</p>
</div>
</a>
{% endfor %}
@@ -146,7 +182,13 @@ content %}
class="w-full h-48 object-cover"
/>
<div class="p-4">
<p class="text-sm truncate">{{ image.filename }}</p>
<p class="font-bold text-sm truncate">{{ image.filename }}</p>
<p class="text-xs text-gray-400 mt-1">
Upload ID: {{ image.id[:8] }}...
</p>
<p class="text-xs text-gray-500 mt-1">
{{ image.uploaded_at | fromisoformat | humantime }}
</p>
</div>
</a>
{% endfor %}
@@ -156,4 +198,103 @@ content %}
{% endif %}
</div>
</div>
<!-- Infinite Scroll Loading Indicator -->
<div id="loading-indicator" class="flex justify-center py-8 hidden">
<div class="spinner"></div>
</div>
{% endblock %} {% block scripts %}
<script>
document.addEventListener("DOMContentLoaded", function () {
const galleryContainers = document.querySelectorAll(".grid[data-grid]");
const loadingIndicator = document.getElementById("loading-indicator");
const container = document.querySelector(".container[data-current-page]");
const currentPage = parseInt(container.dataset.currentPage);
const perPage = parseInt(container.dataset.perPage);
let isLoading = false;
let hasMore = true;
// Add data-grid attribute to all gallery grids
document
.querySelectorAll(".grid")
.forEach((grid) => grid.setAttribute("data-grid", ""));
// Infinite scroll handler
window.addEventListener("scroll", async function () {
if (!hasMore || isLoading) return;
const scrollPosition = window.innerHeight + window.scrollY;
const bottomThreshold = document.body.offsetHeight - 1000;
if (scrollPosition >= bottomThreshold) {
isLoading = true;
loadingIndicator.classList.remove("hidden");
// TODO: Implement actual fetching of next page of results and appending to the correct grid(s)
// For demo purposes, we'll just simulate a delay and then hide the loading indicator
// Simulate API call for next page
// In real implementation, replace with actual backend fetch
setTimeout(() => {
isLoading = false;
loadingIndicator.classList.add("hidden");
// Real app would fetch /generate/images?page=${currentPage +1}&limit=${perPage}
// and /generate/videos similarly
}, 1500);
}
});
// Cancel pending video buttons
document.querySelectorAll(".cancel-pending-btn").forEach((btn) => {
btn.addEventListener("click", async (e) => {
e.preventDefault();
e.stopPropagation();
const videoId = btn.dataset.videoId;
const msgEl = btn.parentElement.querySelector(".cancel-pending-msg");
btn.disabled = true;
btn.textContent = "Cancelling…";
try {
const resp = await fetch(
"/generate/video/" + encodeURIComponent(videoId) + "/cancel",
{ method: "POST" },
);
if (resp.ok) {
btn.classList.add("hidden");
if (msgEl) {
msgEl.textContent = "Cancelled";
msgEl.classList.remove("hidden", "text-red-500");
msgEl.classList.add("text-gray-300");
}
const card = document.querySelector(
'[data-pending-video-id="' + videoId + '"]',
);
if (card) {
const statusSpan = card.querySelector(".text-yellow-400");
if (statusSpan) {
statusSpan.textContent = "cancelled";
statusSpan.classList.remove("text-yellow-400");
statusSpan.classList.add("text-gray-400");
}
}
} else {
const data = await resp.json().catch(() => ({}));
btn.disabled = false;
btn.textContent = "Cancel";
if (msgEl) {
msgEl.textContent = data.detail || "Failed";
msgEl.classList.remove("hidden");
msgEl.classList.add("text-red-500");
}
}
} catch (err) {
btn.disabled = false;
btn.textContent = "Cancel";
if (msgEl) {
msgEl.textContent = "Error";
msgEl.classList.remove("hidden");
msgEl.classList.add("text-red-500");
}
}
});
});
});
</script>
{% endblock %}
</div>
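The infinite-scroll handler above fires when the viewport is within 1000px of the page bottom; that trigger condition can be sketched as a pure function (a sketch mirroring the threshold in the diff):

```javascript
// True when the scroll position is within `threshold` px of the
// document bottom — the trigger used by the scroll handler above.
function nearBottom(innerHeight, scrollY, bodyHeight, threshold = 1000) {
  return innerHeight + scrollY >= bodyHeight - threshold;
}
```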
+10 -3
@@ -155,9 +155,9 @@ AI{% endblock %} {% block content %}
{% endif %} {% if result %}
<div class="result">
<h2>Video job</h2>
<p>Job ID: <code>{{ result.id }}</code></p>
{% if result.status in ('queued', 'processing') and result.polling_url %}
<div id="video-poll-status" data-polling-url="{{ result.polling_url }}">
<p>Job ID: <code>{{ result.db_id or result.id }}</code></p>
{% if result.status in ('queued', 'processing') and result.db_id %}
<div id="video-poll-status" data-video-id="{{ result.db_id }}">
<p>
<span id="poll-status-text"
>Status: <strong>{{ result.status }}</strong></span
@@ -165,6 +165,13 @@ AI{% endblock %} {% block content %}
&mdash; checking for updates every 5 s&hellip;
</p>
<div id="poll-video-container"></div>
<button
id="cancel-video-btn"
class="mt-2 px-4 py-2 bg-red-600 hover:bg-red-700 text-white rounded-md text-sm"
>
Cancel Job
</button>
<p id="cancel-msg" class="text-sm mt-2 hidden"></p>
</div>
{% elif result.video_url %}
<video
+9 -2
@@ -12,11 +12,11 @@ block content %}
<div class="bg-gray-800 rounded-lg shadow-lg overflow-hidden">
{% if video.status == 'completed' and video.video_url %}
<video src="{{ video.video_url }}" controls class="w-full"></video>
{% elif video.status in ('queued', 'processing') and video.polling_url %}
{% elif video.status in ('queued', 'processing') %}
<div
class="w-full bg-black aspect-video flex flex-col items-center justify-center p-6 text-center"
id="video-poll-status"
data-polling-url="{{ video.polling_url }}"
data-video-id="{{ video.id }}"
>
<p class="text-xl font-semibold">
Status: <strong id="poll-status-text">{{ video.status }}</strong>
@@ -26,6 +26,13 @@ block content %}
it's ready.
</p>
<div class="spinner mt-4"></div>
<button
id="cancel-video-btn"
class="mt-4 px-4 py-2 bg-red-600 hover:bg-red-700 text-white rounded-md text-sm"
>
Cancel Job
</button>
<p id="cancel-msg" class="text-sm mt-2 hidden"></p>
</div>
{% elif video.status == 'failed' %}
<div