
# OpenRouter API Integration

## Text Generation

> **Warning**
> TODO: Add more details on how the backend integrates with OpenRouter for text generation, including chat completions and single-prompt generation flows.

## Image Generation

Image generation uses two different OpenRouter endpoints depending on the model:

- **Legacy endpoint** (`/images/generations`): used by DALL-E 3 and similar models. Returns `data[].url` and `data[].b64_json`.
- **Chat completions** (`/chat/completions` with `modalities: ["image"]`): used by FLUX.2 Klein 4B and GPT-5 Image Mini. Returns `choices[0].message.images[].image_url.url` as base64 data URLs.

The router auto-detects the model type and routes accordingly. Image configuration (`aspect_ratio`, `image_size`) is passed via `image_config` for chat-based models.
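The routing decision above can be sketched roughly as follows. This is an illustrative sketch, not the backend's actual code: the function name, the model ID strings, and the membership-check approach are all assumptions.

```python
# Placeholder model IDs for the chat-based image models named above;
# the real OpenRouter slugs may differ.
CHAT_IMAGE_MODELS = {
    "black-forest-labs/flux.2-klein-4b",
    "openai/gpt-5-image-mini",
}

def build_image_request(model: str, prompt: str, aspect_ratio: str = "1:1"):
    """Return an (endpoint, payload) pair for an image generation call."""
    if model in CHAT_IMAGE_MODELS:
        # Chat-based models: request the image modality and pass sizing
        # options via image_config.
        return "/chat/completions", {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "modalities": ["image"],
            "image_config": {"aspect_ratio": aspect_ratio},
        }
    # Legacy models (e.g. DALL-E 3) use the dedicated images endpoint.
    return "/images/generations", {"model": model, "prompt": prompt}
```

A real router would likely key this off model metadata rather than a hard-coded set, but the two payload shapes match the endpoints described above.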

## Video Generation

Video generation uses OpenRouter's `/api/v1/videos` endpoint with a submit-and-poll pattern:

1. `POST /api/v1/videos` with `model`, `prompt`, `aspect_ratio`, `resolution`, and `duration_seconds`.
2. The response is `{"id": "job_id", "polling_url": "https://..."}` with `status: "queued"`.
3. Poll `GET polling_url` every 5 seconds until `status` is `"completed"` or `"failed"`.
4. The completed response includes an `unsigned_urls: [str]` array with video download URLs.
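The polling step can be sketched as below. The function and parameter names are assumptions; the HTTP client is injected as a `fetch` callable (returning decoded JSON) so the sketch stays transport-agnostic.

```python
import time

def poll_video_job(polling_url: str, fetch, interval: float = 5.0, sleep=time.sleep) -> dict:
    """Poll a video job until it reaches a terminal status.

    `fetch(url)` must return the job's decoded JSON status object;
    `sleep` is injectable for testing. Illustrative sketch only.
    """
    while True:
        job = fetch(polling_url)
        # Terminal states per the flow described above.
        if job.get("status") in ("completed", "failed"):
            return job
        sleep(interval)  # the docs poll every 5 seconds
```

A production version would also want a timeout or maximum attempt count so a stuck job cannot poll forever.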

Supported models: `openai/sora-2-pro` and `google/veo-3.1-fast`. Both text-to-video and image-to-video use the same `/api/v1/videos` endpoint; image-to-video additionally includes `image_url` in the request body.