feat: add video job cancellation functionality and error tracking in generated videos
Co-authored-by: Copilot <copilot@github.com>
+11
-6
@@ -16,11 +16,16 @@ The router auto-detects the model type and routes accordingly. Image configurati
## Video Generation

-Video generation uses OpenRouter's `/api/v1/videos` endpoint with a **submit-and-poll** pattern:
+Video generation uses OpenRouter's `/api/v1/videos` endpoint with a **submit-and-poll** pattern orchestrated by a background worker:

-1. `POST /api/v1/videos` with `model`, `prompt`, `aspect_ratio`, `resolution`, `duration_seconds`
-2. Response: `{"id": "job_id", "polling_url": "https://..."}` with `status: "queued"`
-3. Poll `GET polling_url` every 5 seconds until `status` is `"completed"` or `"failed"`
-4. Completed response includes `unsigned_urls: [str]` array with video download URLs
+1. User submits a video request via `POST /generate/video` (or `/generate/video/from-image`)
+2. Backend inserts a row into `generated_videos` with `status: "queued"` and returns immediately
+3. Background worker (`video_worker.py`) picks up queued jobs every 15 seconds:
+   - Calls `POST /api/v1/videos` with `model`, `prompt`, `aspect_ratio`, `resolution`, `duration`
+   - Receives `{"id": "job_id", "polling_url": "https://..."}` and updates DB to `status: "processing"`
+   - Polls `GET polling_url` every 15 seconds until `status` is `"completed"` or `"failed"`
+   - Updates DB with final status, `video_url`, and any `error` message
+4. Frontend polls `GET /generate/video/{db_id}/status` every 5 seconds to show live updates
+5. Completed response includes `video_url` — the video is displayed in a `<video>` element
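The worker's polling loop (step 3 of the new flow) can be sketched roughly as below. This is a hypothetical helper, not the actual `video_worker.py` code: `fetch` is an injected callable standing in for an HTTP GET that returns the decoded JSON body, and the timeout policy is an assumption.

```python
import time

def poll_video_job(polling_url, fetch, interval=15, max_attempts=240):
    """Poll an OpenRouter video job until it settles.

    `fetch` maps a URL to the decoded JSON body (injected so the loop
    can run without a network; the real worker would use an HTTP client).
    """
    for _ in range(max_attempts):
        job = fetch(polling_url)
        # per the docs above, terminal states are "completed" / "failed"
        if job.get("status") in ("completed", "failed"):
            return job  # carries video_url or an error message
        time.sleep(interval)
    # give up and surface a failure the generated_videos row can record
    return {"status": "failed", "error": "polling timed out"}
```

With a stubbed `fetch`, the loop returns the first settled payload it sees, which is what the worker would then write back to the database.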
-Supported models: `openai/sora-2-pro`, `google/veo-3.1-fast`. Both text-to-video and image-to-video use the same `/api/v1/videos` endpoint (image-to-video includes `image_url` in the request body).
+Supported models: `openai/sora-2-pro`, `google/veo-3.1-fast`. Both text-to-video and image-to-video use the same `/api/v1/videos` endpoint (image-to-video includes `frame_images` with `first_frame` in the request body).
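The difference between the two request bodies can be sketched as follows. `build_video_request` is a hypothetical helper, and the exact shape of `frame_images` is an assumption inferred from the line above; only the field names come from the docs.

```python
def build_video_request(model, prompt, aspect_ratio="16:9",
                        resolution="720p", duration=8, image_url=None):
    """Assemble a POST /api/v1/videos body (hypothetical helper)."""
    body = {
        "model": model,
        "prompt": prompt,
        "aspect_ratio": aspect_ratio,
        "resolution": resolution,
        "duration": duration,
    }
    if image_url is not None:
        # image-to-video: pass the source image as the first frame
        # (assumed shape for frame_images / first_frame)
        body["frame_images"] = {"first_frame": image_url}
    return body
```

Text-to-video simply omits `image_url`, so the same helper serves both `/generate/video` and `/generate/video/from-image`.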