# Siray.ai API Docs

## Docs

- [Introduction](https://docs.siray.ai/api-reference/introduction.md): Siray uses a unified API key structure, simplifying integration and allowing access to all models from a single credential.
- [Agnes 1.5 Pro](https://docs.siray.ai/api-reference/model-api/agnes-1.5-pro.md)
- [claude-haiku-4-5-20251001](https://docs.siray.ai/api-reference/model-api/claude-haiku-4-5-20251001.md)
- [Claude Haiku 4.5](https://docs.siray.ai/api-reference/model-api/claude-haiku-4.5.md)
- [Claude Opus 4.1 Thinking](https://docs.siray.ai/api-reference/model-api/claude-opus-4.1-thinking.md)
- [Claude Opus 4.5](https://docs.siray.ai/api-reference/model-api/claude-opus-4.5.md)
- [Claude Opus 4.5 Thinking](https://docs.siray.ai/api-reference/model-api/claude-opus-4.5-thinking.md)
- [Claude Opus 4.6](https://docs.siray.ai/api-reference/model-api/claude-opus-4.6.md)
- [claude-sonnet-4-5-20250929](https://docs.siray.ai/api-reference/model-api/claude-sonnet-4-5-20250929.md)
- [Claude Sonnet 4.5](https://docs.siray.ai/api-reference/model-api/claude-sonnet-4.5.md)
- [Claude Sonnet 4.6](https://docs.siray.ai/api-reference/model-api/claude-sonnet-4.6.md)
- [DeepSeek R1](https://docs.siray.ai/api-reference/model-api/deepseek-r1.md)
- [DeepSeek V3.1](https://docs.siray.ai/api-reference/model-api/deepseek-v3.1.md)
- [DeepSeek V3.1 Terminus](https://docs.siray.ai/api-reference/model-api/deepseek-v3.1-terminus.md)
- [DeepSeek V3.2](https://docs.siray.ai/api-reference/model-api/deepseek-v3.2.md)
- [DeepSeek V3.2 Exp](https://docs.siray.ai/api-reference/model-api/deepseek-v3.2-exp.md)
- [Embedding Model Example](https://docs.siray.ai/api-reference/model-api/example-usage-genai-embedding.md): Examples for Python.
- [Image Generation Example](https://docs.siray.ai/api-reference/model-api/example-usage-image.md): Examples for Python, Node.js, and HTTP.
- [Flux 1.1 Pro i2i](https://docs.siray.ai/api-reference/model-api/flux-1.1-pro-i2i.md)
- [Flux 1.1 Pro t2i](https://docs.siray.ai/api-reference/model-api/flux-1.1-pro-t2i.md)
- [Flux 1.1 Pro t2i Test](https://docs.siray.ai/api-reference/model-api/flux-1.1-pro-t2i-test.md)
- [Flux 1.1 Pro Ultra i2i](https://docs.siray.ai/api-reference/model-api/flux-1.1-pro-ultra-i2i.md)
- [Flux 1.1 Pro Ultra t2i](https://docs.siray.ai/api-reference/model-api/flux-1.1-pro-ultra-t2i.md)
- [Flux Kontext i2i Max](https://docs.siray.ai/api-reference/model-api/flux-kontext-i2i-max.md)
- [Flux Kontext i2i Pro](https://docs.siray.ai/api-reference/model-api/flux-kontext-i2i-pro.md)
- [Flux Kontext t2i Max](https://docs.siray.ai/api-reference/model-api/flux-kontext-t2i-max.md)
- [Flux Kontext t2i Pro](https://docs.siray.ai/api-reference/model-api/flux-kontext-t2i-pro.md)
- [Gemini 2.0 Flash](https://docs.siray.ai/api-reference/model-api/gemini-2.0-flash.md)
- [Gemini 2.5 Flash](https://docs.siray.ai/api-reference/model-api/gemini-2.5-flash.md)
- [Gemini 2.5 Flash Lite](https://docs.siray.ai/api-reference/model-api/gemini-2.5-flash-lite.md)
- [Gemini 2.5 Pro](https://docs.siray.ai/api-reference/model-api/gemini-2.5-pro.md)
- [Gemini 3 Flash Preview](https://docs.siray.ai/api-reference/model-api/gemini-3-flash-preview.md)
- [Gemini 3 Pro Image Preview (Nano Banana Pro)](https://docs.siray.ai/api-reference/model-api/gemini-3-pro-image-preview.md)
- [Gemini 3.1 Flash Image Preview (Nano Banana 2)](https://docs.siray.ai/api-reference/model-api/gemini-3.1-flash-image-preview.md)
- [Gemini 3.1 Pro Preview](https://docs.siray.ai/api-reference/model-api/gemini-3.1-pro-preview.md)
- [Gemini Embedding 001](https://docs.siray.ai/api-reference/model-api/gemini-embedding-001.md)
- [GLM 4.6V Flash](https://docs.siray.ai/api-reference/model-api/glm-4.6v-flash.md)
- [GLM 4.7](https://docs.siray.ai/api-reference/model-api/glm-4.7.md)
- [GLM 5](https://docs.siray.ai/api-reference/model-api/glm-5.md)
- [GPT 4.1](https://docs.siray.ai/api-reference/model-api/gpt-4.1.md)
- [GPT 4.1 Mini](https://docs.siray.ai/api-reference/model-api/gpt-4.1-mini.md)
- [GPT 4.1 Nano](https://docs.siray.ai/api-reference/model-api/gpt-4.1-nano.md)
- [GPT 4o](https://docs.siray.ai/api-reference/model-api/gpt-4o.md)
- [GPT 4o mini](https://docs.siray.ai/api-reference/model-api/gpt-4o-mini.md)
- [GPT 5](https://docs.siray.ai/api-reference/model-api/gpt-5.md)
- [GPT 5 Chat](https://docs.siray.ai/api-reference/model-api/gpt-5-chat.md)
- [GPT 5 CodeX](https://docs.siray.ai/api-reference/model-api/gpt-5-codex.md)
- [GPT 5 Nano](https://docs.siray.ai/api-reference/model-api/gpt-5-nano.md)
- [GPT 5.1](https://docs.siray.ai/api-reference/model-api/gpt-5.1.md)
- [GPT 5.1 Chat](https://docs.siray.ai/api-reference/model-api/gpt-5.1-chat.md)
- [GPT 5.1 CodeX](https://docs.siray.ai/api-reference/model-api/gpt-5.1-codex.md)
- [GPT 5.1 CodeX Mini](https://docs.siray.ai/api-reference/model-api/gpt-5.1-codex-mini.md)
- [GPT 5.2](https://docs.siray.ai/api-reference/model-api/gpt-5.2.md)
- [GPT 5.2 Chat](https://docs.siray.ai/api-reference/model-api/gpt-5.2-chat.md)
- [GPT 5.2 CodeX](https://docs.siray.ai/api-reference/model-api/gpt-5.2-codex.md)
- [GPT 5.3 CodeX](https://docs.siray.ai/api-reference/model-api/gpt-5.3-codex.md)
- [GPT 5.4](https://docs.siray.ai/api-reference/model-api/gpt-5.4.md)
- [GPT Image 1.5 i2i High](https://docs.siray.ai/api-reference/model-api/gpt-image-1.5-i2i-high.md)
- [GPT Image 1.5 i2i Low](https://docs.siray.ai/api-reference/model-api/gpt-image-1.5-i2i-low.md)
- [GPT Image 1.5 i2i Medium](https://docs.siray.ai/api-reference/model-api/gpt-image-1.5-i2i-medium.md)
- [GPT Image 1.5 ref2i High](https://docs.siray.ai/api-reference/model-api/gpt-image-1.5-ref2i-high.md)
- [GPT Image 1.5 ref2i Low](https://docs.siray.ai/api-reference/model-api/gpt-image-1.5-ref2i-low.md)
- [GPT Image 1.5 ref2i Medium](https://docs.siray.ai/api-reference/model-api/gpt-image-1.5-ref2i-medium.md)
- [GPT Image 1.5 t2i High](https://docs.siray.ai/api-reference/model-api/gpt-image-1.5-t2i-high.md)
- [GPT Image 1.5 t2i Low](https://docs.siray.ai/api-reference/model-api/gpt-image-1.5-t2i-low.md)
- [GPT Image 1.5 t2i Medium](https://docs.siray.ai/api-reference/model-api/gpt-image-1.5-t2i-medium.md)
- [GPT oss 120b](https://docs.siray.ai/api-reference/model-api/gpt-oss-120b.md)
- [GPT oss 20b](https://docs.siray.ai/api-reference/model-api/gpt-oss-20b.md)
- [Grok 4](https://docs.siray.ai/api-reference/model-api/grok-4.md)
- [Grok 4 Fast Non Reasoning](https://docs.siray.ai/api-reference/model-api/grok-4-fast-non-reasoning.md)
- [Grok 4 Fast Reasoning](https://docs.siray.ai/api-reference/model-api/grok-4-fast-reasoning.md)
- [Grok 4.1 Fast Non Reasoning](https://docs.siray.ai/api-reference/model-api/grok-4.1-fast-non-reasoning.md)
- [Grok 4.1 Fast Reasoning](https://docs.siray.ai/api-reference/model-api/grok-4.1-fast-reasoning.md)
- [Grok Imagine Image i2i](https://docs.siray.ai/api-reference/model-api/grok-imagine-image-i2i.md)
- [Grok Imagine Image t2i](https://docs.siray.ai/api-reference/model-api/grok-imagine-image-t2i.md)
- [Grok Imagine Video Extension](https://docs.siray.ai/api-reference/model-api/grok-imagine-video-extension.md)
- [Grok Imagine Video i2v](https://docs.siray.ai/api-reference/model-api/grok-imagine-video-i2v.md)
- [Grok Imagine Video t2v](https://docs.siray.ai/api-reference/model-api/grok-imagine-video-t2v.md)
- [Hunyuan Image 3 Instruct i2i](https://docs.siray.ai/api-reference/model-api/hunyuan-image-3-instruct-i2i.md)
- [Hunyuan Image 3 Instruct t2i](https://docs.siray.ai/api-reference/model-api/hunyuan-image-3-instruct-t2i.md)
- [Hunyuan3d V2.5 Rapid image-to-3d](https://docs.siray.ai/api-reference/model-api/hunyuan3d-v2.5-rapid-image-to-3d.md)
- [Hunyuan3d V2.5 Rapid text-to-3d](https://docs.siray.ai/api-reference/model-api/hunyuan3d-v2.5-rapid-text-to-3d.md)
- [Kimi K2.5](https://docs.siray.ai/api-reference/model-api/kimi-k2.5.md)
- [Kling 1.6 Pro i2v](https://docs.siray.ai/api-reference/model-api/kling-1.6-pro-i2v.md)
- [Kling 1.6 Standard i2v](https://docs.siray.ai/api-reference/model-api/kling-1.6-standard-i2v.md)
- [Kling 1.6 Standard t2v](https://docs.siray.ai/api-reference/model-api/kling-1.6-standard-t2v.md)
- [Kling 2.1 Master i2v](https://docs.siray.ai/api-reference/model-api/kling-2.1-master-i2v.md)
- [Kling 2.1 Standard i2v](https://docs.siray.ai/api-reference/model-api/kling-2.1-standard-i2v.md)
- [Kling 2.6 Motion Control](https://docs.siray.ai/api-reference/model-api/kling-2.6-motion-control.md)
- [Kling 2.6 Pro i2v](https://docs.siray.ai/api-reference/model-api/kling-2.6-pro-i2v.md)
- [Kling 2.6 Pro Motion Control](https://docs.siray.ai/api-reference/model-api/kling-2.6-pro-motion-control.md)
- [Kling 2.6 Pro t2v](https://docs.siray.ai/api-reference/model-api/kling-2.6-pro-t2v.md)
- [Kling 3 Motion Control](https://docs.siray.ai/api-reference/model-api/kling-3-motion-control.md)
- [Kling 3.0 i2v](https://docs.siray.ai/api-reference/model-api/kling-3.0-i2v.md)
- [Kling 3.0 t2v](https://docs.siray.ai/api-reference/model-api/kling-3.0-t2v.md)
- [Midjourney Niji 6 t2i](https://docs.siray.ai/api-reference/model-api/midjourney-niji-6-t2i.md)
- [Midjourney Niji 7 t2i](https://docs.siray.ai/api-reference/model-api/midjourney-niji-7-t2i.md)
- [Midjourney V6 t2i](https://docs.siray.ai/api-reference/model-api/midjourney-v6-t2i.md)
- [Midjourney V6.1 t2i](https://docs.siray.ai/api-reference/model-api/midjourney-v6.1-t2i.md)
- [Midjourney V7 t2i](https://docs.siray.ai/api-reference/model-api/midjourney-v7-t2i.md)
- [MiMo V2 Flash](https://docs.siray.ai/api-reference/model-api/mimo-v2-flash.md)
- [MiniMax M2.5](https://docs.siray.ai/api-reference/model-api/minimax-m2.5.md)
- [Nano Banana 2 i2i](https://docs.siray.ai/api-reference/model-api/nano-banana-2-i2i.md)
- [Nano Banana 2 t2i](https://docs.siray.ai/api-reference/model-api/nano-banana-2-t2i.md)
- [Nano Banana i2i](https://docs.siray.ai/api-reference/model-api/nano-banana-i2i.md)
- [Nano Banana Pro i2i](https://docs.siray.ai/api-reference/model-api/nano-banana-pro-i2i.md)
- [Nano Banana Pro t2i](https://docs.siray.ai/api-reference/model-api/nano-banana-pro-t2i.md)
- [Nano Banana t2i](https://docs.siray.ai/api-reference/model-api/nano-banana-t2i.md)
- [o1](https://docs.siray.ai/api-reference/model-api/o1.md)
- [o3](https://docs.siray.ai/api-reference/model-api/o3.md)
- [PixVerse C1 i2v](https://docs.siray.ai/api-reference/model-api/pixverse-c1-i2v.md)
- [PixVerse C1 t2v](https://docs.siray.ai/api-reference/model-api/pixverse-c1-t2v.md)
- [PixVerse V5.6 i2v](https://docs.siray.ai/api-reference/model-api/pixverse-v5.6-i2v.md)
- [PixVerse V5.6 t2v](https://docs.siray.ai/api-reference/model-api/pixverse-v5.6-t2v.md)
- [PixVerse V6 i2v](https://docs.siray.ai/api-reference/model-api/pixverse-v6-i2v.md)
- [PixVerse V6 t2v](https://docs.siray.ai/api-reference/model-api/pixverse-v6-t2v.md)
- [Qwen 3.5 Plus](https://docs.siray.ai/api-reference/model-api/qwen-3.5-plus.md)
- [Qwen Long](https://docs.siray.ai/api-reference/model-api/qwen-long.md)
- [Qwen Plus](https://docs.siray.ai/api-reference/model-api/qwen-plus.md)
- [Qwen3 Coder 480B A35B Instruct](https://docs.siray.ai/api-reference/model-api/qwen3-coder-480b-a35b-instruct.md)
- [Qwen3 Max 256K](https://docs.siray.ai/api-reference/model-api/qwen3-max-256k.md)
- [Qwen3 Omni Flash](https://docs.siray.ai/api-reference/model-api/qwen3-omni-flash.md)
- [Seed 1.6](https://docs.siray.ai/api-reference/model-api/seed-1.6.md)
- [Seedance 1.0 Lite i2v](https://docs.siray.ai/api-reference/model-api/seedance-1.0-lite-i2v.md)
- [Seedance 1.0 Lite t2v](https://docs.siray.ai/api-reference/model-api/seedance-1.0-lite-t2v.md)
- [Seedance 1.0 Pro Fast t2v](https://docs.siray.ai/api-reference/model-api/seedance-1.0-pro-fast-t2v.md)
- [Seedance 1.0 Pro i2v](https://docs.siray.ai/api-reference/model-api/seedance-1.0-pro-i2v.md)
- [Seedance 1.0 Pro t2v](https://docs.siray.ai/api-reference/model-api/seedance-1.0-pro-t2v.md)
- [Seedance 1.5 Pro i2v](https://docs.siray.ai/api-reference/model-api/seedance-1.5-pro-i2v.md)
- [Seedance 1.5 Pro se2v](https://docs.siray.ai/api-reference/model-api/seedance-1.5-pro-se2v.md)
- [Seedance 1.5 Pro t2v](https://docs.siray.ai/api-reference/model-api/seedance-1.5-pro-t2v.md)
- [Seedance 2.0 Fast i2v](https://docs.siray.ai/api-reference/model-api/seedance-2.0-fast-i2v.md)
- [Seedance 2.0 Fast t2v](https://docs.siray.ai/api-reference/model-api/seedance-2.0-fast-t2v.md)
- [Seedance 2.0 i2v](https://docs.siray.ai/api-reference/model-api/seedance-2.0-i2v.md)
- [Seedance 2.0 t2v](https://docs.siray.ai/api-reference/model-api/seedance-2.0-t2v.md)
- [Seedream 4.0 i2i](https://docs.siray.ai/api-reference/model-api/seedream-4.0-i2i.md)
- [Seedream 4.0 ref2i](https://docs.siray.ai/api-reference/model-api/seedream-4.0-ref2i.md)
- [Seedream 4.0 t2i](https://docs.siray.ai/api-reference/model-api/seedream-4.0-t2i.md)
- [Seedream 4.5 i2i](https://docs.siray.ai/api-reference/model-api/seedream-4.5-i2i.md)
- [Seedream 4.5 ref2i](https://docs.siray.ai/api-reference/model-api/seedream-4.5-ref2i.md)
- [Seedream 4.5 t2i](https://docs.siray.ai/api-reference/model-api/seedream-4.5-t2i.md)
- [Sora 2 i2v](https://docs.siray.ai/api-reference/model-api/sora-2-i2v.md)
- [Sora 2 t2v](https://docs.siray.ai/api-reference/model-api/sora-2-t2v.md)
- [Text Embedding 3 Large](https://docs.siray.ai/api-reference/model-api/text-embedding-3-large.md)
- [Text Embedding 3 Small](https://docs.siray.ai/api-reference/model-api/text-embedding-3-small.md)
- [Tripo3D V2.5 image-to-3d](https://docs.siray.ai/api-reference/model-api/tripo3d-v2.5-image-to-3d.md)
- [Veo 3.1 i2v](https://docs.siray.ai/api-reference/model-api/veo-3.1-i2v.md)
- [Veo 3.1 t2v](https://docs.siray.ai/api-reference/model-api/veo-3.1-t2v.md)
- [Vidu Q2-pro-fast i2v](https://docs.siray.ai/api-reference/model-api/vidu-q2-pro-fast-i2v.md)
- [Vidu Q2-pro i2v](https://docs.siray.ai/api-reference/model-api/vidu-q2-pro-i2v.md)
- [Vidu Q2-pro se2v](https://docs.siray.ai/api-reference/model-api/vidu-q2-pro-se2v.md)
- [Vidu Q2 t2v](https://docs.siray.ai/api-reference/model-api/vidu-q2-t2v.md)
- [Vidu Q3-pro i2v](https://docs.siray.ai/api-reference/model-api/vidu-q3-pro-i2v.md)
- [Vidu Q3-pro t2v](https://docs.siray.ai/api-reference/model-api/vidu-q3-pro-t2v.md)
- [Wan 2.6 i2v](https://docs.siray.ai/api-reference/model-api/wan-2.6-i2v.md)
- [Wan 2.6 t2v](https://docs.siray.ai/api-reference/model-api/wan-2.6-t2v.md)
- [Z-Image Turbo t2i](https://docs.siray.ai/api-reference/model-api/z-image-turbo-t2i.md)
- [3D Model Task Status API](https://docs.siray.ai/api-reference/task-status-3d.md): Monitor asynchronous 3D model generation tasks and download the produced 3D assets.
- [Image Task Status API](https://docs.siray.ai/api-reference/task-status-image.md): Check asynchronous t2i/i2i generation tasks and retrieve the generated image URLs.
- [Video Task Status API](https://docs.siray.ai/api-reference/task-status-video.md): Monitor asynchronous t2v/i2v generation tasks and download the produced video assets.
- [September 5, 2025 Product Updates](https://docs.siray.ai/changelog/05-09-25.md)
- [GPU Quickstart](https://docs.siray.ai/gpu-deploy/gpu-instance-guide.md): Deploy a remote GPU in minutes.
- [Overview](https://docs.siray.ai/gpu-deploy/gpu-overview.md): Siray provides dedicated GPU instances to power your AI workloads with maximum performance and reliability.
- [Introduction](https://docs.siray.ai/index.md): Siray empowers users to unleash stronger AI productivity at a significantly lower cost.
- [Qwen 3.5 Plus](https://docs.siray.ai/model-apis/alibaba/qwen-3.5-plus.md): Qwen 3.5 Plus is Alibaba's enhanced large language model with superior reasoning and multilingual capabilities. Deliver powerful AI performance with excellent Chinese and English support.
- [Qwen Long](https://docs.siray.ai/model-apis/alibaba/qwen-long.md): Alibaba's Qwen Long is a powerful LLM specialized for robust, high-fidelity processing and generation of ultra-long context documents.
- [Qwen Plus](https://docs.siray.ai/model-apis/alibaba/qwen-plus.md): Alibaba's Qwen Plus is an advanced general-purpose LLM that balances powerful reasoning, speed, and cost for diverse, high-value enterprise applications.
- [Qwen3 Coder 480B A35B Instruct](https://docs.siray.ai/model-apis/alibaba/qwen3-coder-480b-a35b-instruct.md): Alibaba's Qwen3 Coder 480B A35B Instruct is a powerful Mixture-of-Experts (MoE) coding model with a 256K context window, specialized for agentic coding, tool use, and understanding massive codebases.
- [Qwen3 Max 256K](https://docs.siray.ai/model-apis/alibaba/qwen3-max-256k.md): Alibaba's Qwen3 Max is a premium LLM with an ultra-long 256K context for deep document analysis, complex reasoning, and high-quality generation.
- [Qwen3 Omni Flash](https://docs.siray.ai/model-apis/alibaba/qwen3-omni-flash.md): Alibaba's Qwen3 Omni Flash is a unified, real-time multimodal MoE model that handles text, image, audio, and video with superior performance.
- [Wan 2.6 i2v](https://docs.siray.ai/model-apis/alibaba/wan-2.6-i2v.md): WAN 2.6 I2V 1080 animates static images into high-resolution videos while preserving visual structure and style.
- [Wan 2.6 t2v](https://docs.siray.ai/model-apis/alibaba/wan-2.6-t2v.md): Wan 2.6 t2v is Alibaba's text-to-video model from the Wan AI series. Generate high-quality videos from text prompts with strong Chinese and English language understanding.
- [Z-Image Turbo t2i](https://docs.siray.ai/model-apis/alibaba/z-image-turbo-t2i.md): Z-Image is a 6-billion-parameter image generation model built on a Single-Stream Diffusion Transformer, delivering photorealistic images and bilingual text rendering with low hardware requirements.
- [Claude Haiku 4.5](https://docs.siray.ai/model-apis/anthropic/claude-haiku-4.5.md): Anthropic's Claude Haiku 4.5 LLM offers its fastest speed and lowest cost for high-volume agentic tasks and coding within the standard 200K context.
- [Claude Opus 4.1 Thinking](https://docs.siray.ai/model-apis/anthropic/claude-opus-4.1-thinking.md): Anthropic's Claude Opus 4.1 is their flagship LLM with Extended Thinking, providing deep, step-by-step reasoning for the most complex coding and agentic problems.
- [Claude Opus 4.5](https://docs.siray.ai/model-apis/anthropic/claude-opus-4.5.md): Claude Opus 4.5 is Anthropic's flagship model, offering state-of-the-art intelligence, human-like reasoning, and superior performance across complex, high-stakes analytical tasks.
- [Claude Opus 4.5 Thinking](https://docs.siray.ai/model-apis/anthropic/claude-opus-4.5-thinking.md): Claude Opus 4.5 Thinking is a high-precision reasoning model designed for complex decision-making, deep analysis, and reliable long-form generation across enterprise and creator workflows.
- [Claude Opus 4.6](https://docs.siray.ai/model-apis/anthropic/claude-opus-4.6.md): Claude Opus 4.6 is Anthropic's most intelligent and capable large language model. Excel at complex reasoning, coding, analysis, and nuanced creative tasks with state-of-the-art performance.
- [Claude Sonnet 4.5](https://docs.siray.ai/model-apis/anthropic/claude-sonnet-4.5.md): Claude Sonnet 4.5 is a developer-focused AI model offering precise reasoning, clean code output, and excellent instruction following.
- [Claude Sonnet 4.6](https://docs.siray.ai/model-apis/anthropic/claude-sonnet-4.6.md): Claude Sonnet 4.6 is Anthropic's balanced large language model combining strong capabilities with efficient performance. Ideal for production workloads requiring quality and speed.
- [API Integration](https://docs.siray.ai/model-apis/api-integration.md): Siray API is designed to provide a consistent interface across multiple platforms, simplifying development.
- [Flux 1.1 Pro i2i](https://docs.siray.ai/model-apis/black-forest-labs/flux-1.1-pro-i2i.md): Black Forest Labs' Flux 1.1 Pro is an advanced, high-speed Image-to-Image model ideal for fast, high-quality style transfer and image-guided generation.
- [Flux 1.1 Pro t2i](https://docs.siray.ai/model-apis/black-forest-labs/flux-1.1-pro-t2i.md): Black Forest Labs' Flux 1.1 Pro is a fast, state-of-the-art Text-to-Image model for high-quality visual generation with superior speed and prompt adherence.
- [Flux 1.1 Pro t2i Test](https://docs.siray.ai/model-apis/black-forest-labs/flux-1.1-pro-t2i-test.md): Black Forest Labs' Flux 1.1 Pro is a fast, state-of-the-art Text-to-Image model for high-quality visual generation with superior speed and prompt adherence.
- [Flux 1.1 Pro Ultra i2i](https://docs.siray.ai/model-apis/black-forest-labs/flux-1.1-pro-ultra-i2i.md): Black Forest Labs' Flux 1.1 Pro Ultra is a premium Image-to-Image model for ultra-high-resolution style transfer and image-guided generation up to 4MP.
- [Flux 1.1 Pro Ultra t2i](https://docs.siray.ai/model-apis/black-forest-labs/flux-1.1-pro-ultra-t2i.md): Black Forest Labs' Flux 1.1 Pro Ultra is a flagship Text-to-Image model that generates ultra-fast, high-fidelity images up to 4MP resolution with new Ultra/Raw modes.
- [Flux Kontext i2i Max](https://docs.siray.ai/model-apis/black-forest-labs/flux-kontext-i2i-max.md): Black Forest Labs' Flux Kontext Max is the premium In-Context Image Editing model offering maximum performance in precise editing, style consistency, and typography control.
- [Flux Kontext i2i Pro](https://docs.siray.ai/model-apis/black-forest-labs/flux-kontext-i2i-pro.md): Black Forest Labs' Flux Kontext Pro is a powerful In-Context Image Editing model for precise, consistent edits, object modifications, and local region control via text.
- [Flux Kontext t2i Max](https://docs.siray.ai/model-apis/black-forest-labs/flux-kontext-t2i-max.md): Black Forest Labs' Flux Kontext Max is the highest-tier Generative Flow model pushing prompt precision, superior typography, and character consistency in T2I.
- [Flux Kontext t2i Pro](https://docs.siray.ai/model-apis/black-forest-labs/flux-kontext-t2i-pro.md): Black Forest Labs' Flux Kontext Pro is a unified Generative Flow model for text-to-image generation that excels in maintaining character and style consistency.
- [Seed 1.6](https://docs.siray.ai/model-apis/bytedance/seed-1.6.md): ByteDance's Doubao Seed 1.6 is a versatile Multimodal Deep Thinking model supporting text and visual inputs with enhanced reasoning and smart tool-calling.
- [Seedance 1.0 Lite i2v](https://docs.siray.ai/model-apis/bytedance/seedance-1.0-lite-i2v.md): ByteDance Seedance 1.0 Lite I2V model. Efficient 720p Image-to-Video AI for cost-effective, high-volume video synthesis and quick visual iterations.
- [Seedance 1.0 Lite t2v](https://docs.siray.ai/model-apis/bytedance/seedance-1.0-lite-t2v.md): ByteDance Seedance 1.0 Lite T2V model. Fast, cost-effective 720p Text-to-Video AI for rapid synthesis and mass creative video generation.
- [Seedance 1.0 Pro Fast t2v](https://docs.siray.ai/model-apis/bytedance/seedance-1.0-pro-fast-t2v.md): ByteDance Seedance 1.0 Pro Fast T2V model. Rapid Text-to-Video generation for efficient visual concept testing and high-volume asset creation.
- [Seedance 1.0 Pro i2v](https://docs.siray.ai/model-apis/bytedance/seedance-1.0-pro-i2v.md): Seedance 1.0 Pro i2v is ByteDance's foundational professional image-to-video model. Animate static images into video sequences with proven stability and reliable results.
- [Seedance 1.0 Pro t2v](https://docs.siray.ai/model-apis/bytedance/seedance-1.0-pro-t2v.md): Seedance 1.0 Pro t2v is ByteDance's foundational professional text-to-video model. Generate quality videos from text prompts with proven reliability and solid performance.
- [Seedance 1.5 Pro i2v](https://docs.siray.ai/model-apis/bytedance/seedance-1.5-pro-i2v.md): Seedance 1.5 Pro i2v is ByteDance's professional image-to-video model. Transform static images into dynamic video sequences with impressive motion quality and detail.
- [Seedance 1.5 Pro se2v](https://docs.siray.ai/model-apis/bytedance/seedance-1.5-pro-se2v.md): Seedance 1.5 Pro se2v is ByteDance's professional start-end-to-video model. Generate smooth video transitions between keyframes with high-quality motion interpolation.
- [Seedance 1.5 Pro t2v](https://docs.siray.ai/model-apis/bytedance/seedance-1.5-pro-t2v.md): Seedance 1.5 Pro t2v is ByteDance's professional text-to-video model. Generate high-quality videos from text prompts with excellent motion dynamics and visual appeal.
- [Seedance 2.0 Fast i2v](https://docs.siray.ai/model-apis/bytedance/seedance-2.0-fast-i2v.md): Seedance 2.0 Fast i2v is ByteDance's speed-optimized image-to-video model. Rapidly animate images into videos with Seedance 2.0 quality at accelerated processing speeds.
- [Seedance 2.0 Fast t2v](https://docs.siray.ai/model-apis/bytedance/seedance-2.0-fast-t2v.md): Seedance 2.0 Fast t2v is ByteDance's speed-optimized text-to-video model. Generate quality videos from text prompts rapidly with Seedance 2.0 capabilities at turbo speeds.
- [Seedance 2.0 i2v](https://docs.siray.ai/model-apis/bytedance/seedance-2.0-i2v.md): Seedance 2.0 i2v is ByteDance's next-generation image-to-video model with major quality improvements. Transform images into stunning videos with enhanced motion and visual fidelity.
- [Seedance 2.0 t2v](https://docs.siray.ai/model-apis/bytedance/seedance-2.0-t2v.md): Seedance 2.0 t2v is ByteDance's next-generation text-to-video model with breakthrough quality. Generate stunning videos from text prompts with industry-leading motion and realism.
- [Seedream 4.0 i2i](https://docs.siray.ai/model-apis/bytedance/seedream-4.0-i2i.md): ByteDance Seedream 4.0 I2I model. Next-gen Image-to-Image AI with enhanced control and fidelity for professional-grade composition and editing.
- [Seedream 4.0 ref2i](https://docs.siray.ai/model-apis/bytedance/seedream-4.0-ref2i.md): Seedream 4.0 Image-to-Image is a powerful AI model that transforms existing images into enhanced or re-styled visuals while preserving structure, details, and creative intent.
- [Seedream 4.0 t2i](https://docs.siray.ai/model-apis/bytedance/seedream-4.0-t2i.md): ByteDance Seedream 4.0 T2I model. State-of-the-art Text-to-Image AI, delivering superior prompt adherence and refined aesthetics for stunning visual art.
- [Seedream 4.5 i2i](https://docs.siray.ai/model-apis/bytedance/seedream-4.5-i2i.md): Seedream 4.5 Image-to-Image is an advanced AI model that transforms existing images into refined, creative outputs while preserving structure and visual intent.
- [Seedream 4.5 ref2i](https://docs.siray.ai/model-apis/bytedance/seedream-4.5-ref2i.md): Seedream 4.5 Image-to-Image enables precise AI-powered image transformations, helping creators iterate, enhance, and restyle visuals with consistency.
- [Seedream 4.5 t2i](https://docs.siray.ai/model-apis/bytedance/seedream-4.5-t2i.md): Seedream 4.5 is an advanced AI image-generation and editing model by ByteDance, delivering high-fidelity visuals, precise prompt adherence, and professional-grade consistency for design and creative workflows.
- [claude-haiku-4-5-20251001](https://docs.siray.ai/model-apis/claude-haiku-4-5-20251001.md): Anthropic's Claude Haiku 4.5 is the efficient LLM that scales its context window past 200K tokens, ideal for specialized, massive long-context document analysis.
- [claude-sonnet-4-5-20250929](https://docs.siray.ai/model-apis/claude-sonnet-4-5-20250929.md): Claude Sonnet 4.5 delivers enterprise-grade AI performance with enhanced reasoning, safety, and stability for production-ready applications.
- [DeepSeek R1](https://docs.siray.ai/model-apis/deepseek/deepseek-r1.md): DeepSeek AI's R1 is a powerful Reasoning Model (MoE, 671B params) utilizing RL to excel in complex, step-by-step problem-solving, math, and coding tasks.
- [DeepSeek V3.1](https://docs.siray.ai/model-apis/deepseek/deepseek-v3.1.md): DeepSeek V3.1 is an advanced large language model optimized for fast reasoning, coding accuracy, and complex problem solving across technical and analytical tasks.
- [DeepSeek V3.1 Terminus](https://docs.siray.ai/model-apis/deepseek/deepseek-v3.1-terminus.md): DeepSeek AI's V3.1 Terminus is a flagship Hybrid MoE LLM with 128K context, optimized for superior agentic workflows, reliable tool use, and robust code generation.
- [DeepSeek V3.2](https://docs.siray.ai/model-apis/deepseek/deepseek-v3.2.md): DeepSeek V3.2 is a high-performance language model built for advanced reasoning, fast generation, and efficient large-scale processing across enterprise and developer workflows.
- [DeepSeek V3.2 Exp](https://docs.siray.ai/model-apis/deepseek/deepseek-v3.2-exp.md): DeepSeek AI's V3.2 Exp is an Experimental MoE LLM introducing DeepSeek Sparse Attention for drastically lower long-context inference cost while maintaining V3.1's quality.
- [Gemini 2.0 Flash](https://docs.siray.ai/model-apis/google/gemini-2.0-flash.md): Google's Gemini 2.0 Flash is a fast, powerful Multimodal LLM with a 1M context window, optimized for high-speed retrieval, coding, and built-in tool use.
- [Gemini 2.5 Flash](https://docs.siray.ai/model-apis/google/gemini-2.5-flash.md): Google's Gemini 2.5 Flash is a highly efficient Multimodal LLM, offering the best balance of speed, multimodal reasoning, and cost for high-volume agentic tasks.
- [Gemini 2.5 Flash Lite](https://docs.siray.ai/model-apis/google/gemini-2.5-flash-lite.md): Google's Gemini 2.5 Flash Lite is their fastest, most cost-effective Multimodal LLM, optimized for maximum efficiency in high-volume, low-latency applications.
- [Gemini 2.5 Pro](https://docs.siray.ai/model-apis/google/gemini-2.5-pro.md): Google's Gemini 2.5 Pro is their most advanced Multimodal LLM with "Thinking" capabilities, excelling at complex reasoning, advanced coding, and 1M context analysis.
- [Gemini 3 Flash Preview](https://docs.siray.ai/model-apis/google/gemini-3-flash-preview.md): Gemini 3 Flash Preview is a fast, lightweight multimodal model optimized for low-latency reasoning and real-time AI interactions.
- [Gemini 3 Pro Image Preview (Nano Banana Pro)](https://docs.siray.ai/model-apis/google/gemini-3-pro-image-preview.md): Google's Gemini 3 Pro Image Preview, codenamed Nano Banana Pro, is the 2026 ultra-fast multimodal variant optimized for real-time image understanding, generation, and lightning-quick reasoning.
- [Gemini 3.1 Flash Image Preview (Nano Banana 2)](https://docs.siray.ai/model-apis/google/gemini-3.1-flash-image-preview.md): Gemini 3.1 Flash Image Preview is Google's fast image generation model codenamed Nano Banana 2. Create high-quality images rapidly with Google's latest lightweight generation tech.
- [Gemini 3.1 Pro Preview](https://docs.siray.ai/model-apis/google/gemini-3.1-pro-preview.md): Gemini 3.1 Pro Preview is Google's cutting-edge multimodal AI model with advanced reasoning capabilities. Experience next-generation AI performance across text, image, and code tasks.
- [Gemini Embedding 001](https://docs.siray.ai/model-apis/google/gemini-embedding-001.md): Gemini Embedding 001 is a powerful text embedding model designed to transform content into dense vectors for semantic search, retrieval, and similarity tasks.
- [Nano Banana 2 i2i](https://docs.siray.ai/model-apis/google/nano-banana-2-i2i.md): Nano Banana 2 i2i is Google's latest lightweight image-to-image model with enhanced transformation capabilities. Edit and restyle images quickly with improved quality and efficiency.
- [Nano Banana 2 t2i](https://docs.siray.ai/model-apis/google/nano-banana-2-t2i.md): Nano Banana 2 t2i is Google's latest lightweight text-to-image model with improved generation quality. Create images from text prompts quickly with enhanced detail and coherence.
- [Nano Banana i2i](https://docs.siray.ai/model-apis/google/nano-banana-i2i.md): Nano Banana i2i is Google's foundational lightweight image-to-image model. Transform images quickly with efficient processing ideal for high-volume or real-time applications.
- [Nano Banana Pro i2i](https://docs.siray.ai/model-apis/google/nano-banana-pro-i2i.md): Nano Banana Pro i2i is Google's professional-grade lightweight image-to-image model. Transform images with enhanced quality output while maintaining fast processing speeds.
- [Nano Banana Pro t2i](https://docs.siray.ai/model-apis/google/nano-banana-pro-t2i.md): Nano Banana Pro t2i is Google's professional-grade lightweight text-to-image model. Generate high-quality images from text prompts with improved detail while staying efficient.
- [Nano Banana t2i](https://docs.siray.ai/model-apis/google/nano-banana-t2i.md): Nano Banana t2i is Google's foundational lightweight text-to-image model. Generate images from text prompts with ultra-fast speed and minimal resource consumption.
- [Veo 3.1 i2v](https://docs.siray.ai/model-apis/google/veo-3.1-i2v.md): Google DeepMind's Veo 3.1 i2v is a premier Image-to-Video model, animating static images into high-quality, 1080p video with native audio and strong character consistency. - [Veo 3.1 t2v](https://docs.siray.ai/model-apis/google/veo-3.1-t2v.md): Google DeepMind's Veo 3.1 t2v is a state-of-the-art Text-to-Video model, generating high-fidelity, cinematic 1080p video with synchronized, native audio and dialogue. - [Kling 1.6 Pro i2v](https://docs.siray.ai/model-apis/kuaishou/kling-1.6-pro-i2v.md): Kuaishou's Kling 1.6 Pro i2v is a premium Image-to-Video model for 1080p generation, offering advanced first and last-frame control for precise transitions. - [Kling 1.6 Standard i2v](https://docs.siray.ai/model-apis/kuaishou/kling-1.6-standard-i2v.md): Kuaishou's Kling 1.6 Standard i2v is an accessible Image-to-Video model for animating stills into 720p clips, focusing on first-frame guidance and speed. - [Kling 1.6 Standard t2v](https://docs.siray.ai/model-apis/kuaishou/kling-1.6-standard-t2v.md): Kuaishou's Kling 1.6 Standard t2v is a core Text-to-Video model for fast, cost-effective 720p clip generation, ideal for quick content creation and prototyping. - [Kling 2.1 Master i2v](https://docs.siray.ai/model-apis/kuaishou/kling-2.1-master-i2v.md): Kuaishou's Kling 2.1 Master i2v is the top-tier Image-to-Video model for 1080p cinematic generation, with superior 3D motion, expression, and narrative control. - [Kling 2.1 Standard i2v](https://docs.siray.ai/model-apis/kuaishou/kling-2.1-standard-i2v.md): Kuaishou's Kling 2.1 Standard i2v is a base-tier Image-to-Video model offering high-quality 720p animation and strong character consistency at a competitive speed. 
- [Kling 2.6 Motion Control](https://docs.siray.ai/model-apis/kuaishou/kling-2.6-motion-control.md): Kling 2.6 Motion Control is an advanced AI video generation model focused on precise motion guidance, enabling creators to produce smooth, controllable, and cinematic animations at scale. - [Kling 2.6 Pro i2v](https://docs.siray.ai/model-apis/kuaishou/kling-2.6-pro-i2v.md): Kling 2.6 Pro I2V is an advanced image-to-video model that transforms static images into high-quality, cinematic videos with smooth motion and realistic details. - [Kling 2.6 Pro Motion Control](https://docs.siray.ai/model-apis/kuaishou/kling-2.6-pro-motion-control.md): Kling 2.6 Pro Motion Control is an advanced AI video generation model focused on precise motion guidance, enabling creators to produce smooth, controllable, and cinematic animations at scale. - [Kling 2.6 Pro t2v](https://docs.siray.ai/model-apis/kuaishou/kling-2.6-pro-t2v.md): Kling 2.6 is a breakthrough AI video model blending cinematic visuals, seamless motion, and built-in audio for text- or image-driven video generation with native sound. - [Kling 3 Motion Control](https://docs.siray.ai/model-apis/kuaishou/kling-3-motion-control.md): Kling 3 Motion Control is Kuaishou's advanced video model with precise motion guidance. Generate videos with controlled camera movements and subject motion using trajectory inputs. - [Kling 3.0 i2v](https://docs.siray.ai/model-apis/kuaishou/kling-3.0-i2v.md): Kling 3.0 i2v is Kuaishou's latest image-to-video model with unified multimodal capabilities. Transform images into dynamic videos with advanced motion and character consistency. - [Kling 3.0 t2v](https://docs.siray.ai/model-apis/kuaishou/kling-3.0-t2v.md): Kling 3.0 t2v is Kuaishou's latest text-to-video model with cutting-edge generation quality. Create stunning videos from text prompts with superior motion and visual fidelity. 
- [Midjourney Niji 6 t2i](https://docs.siray.ai/model-apis/midjourney/midjourney-niji-6-t2i.md): Midjourney Niji 6 t2i is Midjourney's anime-specialized model optimized for Japanese art styles. Generate stunning anime, manga, and illustration artwork with authentic aesthetics. All Midjourney models generate four images per request. - [Midjourney Niji 7 t2i](https://docs.siray.ai/model-apis/midjourney/midjourney-niji-7-t2i.md): Midjourney Niji 7 t2i is the latest anime-focused model with enhanced character and style capabilities. Create next-level anime artwork with improved detail and expression. - [Midjourney V6 t2i](https://docs.siray.ai/model-apis/midjourney/midjourney-v6-t2i.md): Midjourney V6 t2i is Midjourney's powerful text-to-image model with exceptional artistic quality. Generate stunning visuals from text prompts with industry-leading aesthetic appeal. - [Midjourney V6.1 t2i](https://docs.siray.ai/model-apis/midjourney/midjourney-v6.1-t2i.md): Midjourney V6.1 t2i is an enhanced version with improved coherence and detail. Generate refined visuals with better prompt adherence and upgraded artistic capabilities. - [Midjourney V7 t2i](https://docs.siray.ai/model-apis/midjourney/midjourney-v7-t2i.md): Midjourney V7 t2i is Midjourney's latest flagship text-to-image model. Experience cutting-edge image generation with breakthrough quality, realism, and creative capabilities. - [MiniMax M2.5](https://docs.siray.ai/model-apis/minimax/minimax-m2.5.md): MiniMax M2.5 is MiniMax's advanced large language model with strong multimodal and conversational abilities. Deliver intelligent responses with natural, engaging interaction quality. 
- [Kimi K2.5](https://docs.siray.ai/model-apis/moonshotai/kimi-k2.5.md): Kimi K2.5 is a high-performance large language model optimized for long-context reasoning, precise instruction following, and stable multilingual output across complex real-world tasks. - [GPT 4.1](https://docs.siray.ai/model-apis/openai/gpt-4.1.md): GPT 4.1 is a flagship LLM from OpenAI, offering massive context capacity, improved reasoning and coding, and robust instruction-following for modern AI applications. - [GPT 4.1 Mini](https://docs.siray.ai/model-apis/openai/gpt-4.1-mini.md): GPT-4.1 Mini is a lightweight yet capable language model that balances reasoning quality, fast response times, and affordability for everyday AI workloads. - [GPT 4.1 Nano](https://docs.siray.ai/model-apis/openai/gpt-4.1-nano.md): GPT-4.1 Nano is a compact and efficient language model optimized for low latency, high throughput, and cost-effective AI deployments at scale. - [GPT 4o](https://docs.siray.ai/model-apis/openai/gpt-4o.md): OpenAI's GPT-4o is their flagship Multimodal LLM that natively processes text, image, and audio inputs for superior real-time interaction, coding, and complex reasoning. - [GPT 4o mini](https://docs.siray.ai/model-apis/openai/gpt-4o-mini.md): OpenAI's GPT-4o mini is their highly efficient Multimodal LLM with 128K context, optimized for high-speed, low-cost retrieval, general tasks, and image comprehension. - [GPT 5](https://docs.siray.ai/model-apis/openai/gpt-5.md): OpenAI's GPT-5 is the premier Reasoning LLM optimized for deep, step-by-step thinking, complex multi-step planning, advanced coding, and superior enterprise logic. - [GPT 5 Chat](https://docs.siray.ai/model-apis/openai/gpt-5-chat.md): GPT-5 Chat is a conversation-optimized language model built for natural dialogue, contextual understanding, and smooth multi-turn interactions. 
- [GPT 5 CodeX](https://docs.siray.ai/model-apis/openai/gpt-5-codex.md): GPT-5 Codex is a specialized version of the GPT-5 AI model that is fine-tuned for software engineering and coding tasks. - [GPT 5 Nano](https://docs.siray.ai/model-apis/openai/gpt-5-nano.md): GPT-5 Nano is a compact language model optimized for ultra-fast responses and low cost, ideal for high-volume and latency-sensitive AI workloads. - [GPT 5.1](https://docs.siray.ai/model-apis/openai/gpt-5.1.md): GPT 5.1 is a general-purpose advanced language model that delivers high accuracy, stronger reasoning, and improved stability for complex tasks across business, research, and creative workflows. - [GPT 5.1 Chat](https://docs.siray.ai/model-apis/openai/gpt-5.1-chat.md): GPT 5.1 Chat is optimized for natural conversations, delivering fast responses, high contextual awareness, and reliable dialogue performance for interactive experiences. - [GPT 5.1 CodeX](https://docs.siray.ai/model-apis/openai/gpt-5.1-codex.md): GPT 5.1 CodeX is an advanced coding-optimized model designed for deep understanding of software tasks, offering strong debugging, structured generation, and multi-language coding support. - [GPT 5.1 CodeX Mini](https://docs.siray.ai/model-apis/openai/gpt-5.1-codex-mini.md): GPT 5.1 CodeX Mini is a lightweight, fast coding model built for quick responses, low compute cost, and efficient generation suited for everyday programming tasks. - [GPT 5.2](https://docs.siray.ai/model-apis/openai/gpt-5.2.md): GPT 5.2 is an advanced language model designed for high-precision reasoning, rapid response times, and scalable production workloads. It delivers stronger accuracy across coding, writing, and workflow automation. - [GPT 5.2 Chat](https://docs.siray.ai/model-apis/openai/gpt-5.2-chat.md): GPT 5.2 Chat is a conversational AI model optimized for natural dialogue, fast responses, and reliable context handling across extended multi-turn conversations. 
- [GPT 5.2 CodeX](https://docs.siray.ai/model-apis/openai/gpt-5.2-codex.md): GPT 5.2 CodeX is OpenAI's specialized coding model built for advanced software development. Excel at code generation, debugging, and complex programming tasks with cutting-edge performance. - [GPT 5.3 CodeX](https://docs.siray.ai/model-apis/openai/gpt-5.3-codex.md): GPT 5.3 CodeX is OpenAI's specialized coding model built for advanced software development. Excel at code generation, debugging, and complex programming tasks with cutting-edge performance. - [GPT 5.4](https://docs.siray.ai/model-apis/openai/gpt-5.4.md): GPT 5.4 is OpenAI's latest flagship model with breakthrough reasoning and enhanced capabilities. Experience the most advanced AI performance with improved accuracy and intelligence. - [GPT Image 1.5 i2i High](https://docs.siray.ai/model-apis/openai/gpt-image-1.5-i2i-high.md): GPT Image 1.5 is OpenAI’s newest flagship image model powering the latest ChatGPT Images. It delivers significantly faster image generation with stronger instruction following, more precise edits that preserve original details, more believable transformations, and improved rendering of dense or smal… - [GPT Image 1.5 i2i Low](https://docs.siray.ai/model-apis/openai/gpt-image-1.5-i2i-low.md): GPT Image 1.5 is OpenAI’s newest flagship image model powering the latest ChatGPT Images. It delivers significantly faster image generation with stronger instruction following, more precise edits that preserve original details, more believable transformations, and improved rendering of dense or smal… - [GPT Image 1.5 i2i Medium](https://docs.siray.ai/model-apis/openai/gpt-image-1.5-i2i-medium.md): GPT Image 1.5 is OpenAI’s newest flagship image model powering the latest ChatGPT Images. 
It delivers significantly faster image generation with stronger instruction following, more precise edits that preserve original details, more believable transformations, and improved rendering of dense or smal… - [GPT Image 1.5 ref2i High](https://docs.siray.ai/model-apis/openai/gpt-image-1.5-ref2i-high.md): GPT Image 1.5 is OpenAI’s newest flagship image model powering the latest ChatGPT Images. It delivers significantly faster image generation with stronger instruction following, more precise edits that preserve original details, more believable transformations, and improved rendering of dense or smal… - [GPT Image 1.5 ref2i Low](https://docs.siray.ai/model-apis/openai/gpt-image-1.5-ref2i-low.md): GPT Image 1.5 is OpenAI’s newest flagship image model powering the latest ChatGPT Images. It delivers significantly faster image generation with stronger instruction following, more precise edits that preserve original details, more believable transformations, and improved rendering of dense or smal… - [GPT Image 1.5 ref2i Medium](https://docs.siray.ai/model-apis/openai/gpt-image-1.5-ref2i-medium.md): GPT Image 1.5 is OpenAI’s newest flagship image model powering the latest ChatGPT Images. It delivers significantly faster image generation with stronger instruction following, more precise edits that preserve original details, more believable transformations, and improved rendering of dense or smal… - [GPT Image 1.5 t2i High](https://docs.siray.ai/model-apis/openai/gpt-image-1.5-t2i-high.md): GPT Image 1.5 is OpenAI’s newest flagship image model powering the latest ChatGPT Images. It delivers significantly faster image generation with stronger instruction following, more precise edits that preserve original details, more believable transformations, and improved rendering of dense or smal… - [GPT Image 1.5 t2i Low](https://docs.siray.ai/model-apis/openai/gpt-image-1.5-t2i-low.md): GPT Image 1.5 is OpenAI’s newest flagship image model powering the latest ChatGPT Images. 
It delivers significantly faster image generation with stronger instruction following, more precise edits that preserve original details, more believable transformations, and improved rendering of dense or smal… - [GPT Image 1.5 t2i Medium](https://docs.siray.ai/model-apis/openai/gpt-image-1.5-t2i-medium.md): GPT Image 1.5 is OpenAI’s newest flagship image model powering the latest ChatGPT Images. It delivers significantly faster image generation with stronger instruction following, more precise edits that preserve original details, more believable transformations, and improved rendering of dense or smal… - [GPT oss 120b](https://docs.siray.ai/model-apis/openai/gpt-oss-120b.md): GPT OSS 120B is a large-scale open-source language model with 120B parameters, delivering strong reasoning, generation quality, and full deployment flexibility. - [GPT oss 20b](https://docs.siray.ai/model-apis/openai/gpt-oss-20b.md): OpenAI's GPT oss 20B is an Open-Weight MoE LLM designed for on-device use, offering strong reasoning, function calling, and agentic capabilities under the Apache 2.0 license. - [o1](https://docs.siray.ai/model-apis/openai/o1.md): OpenAI's o1 is a cutting-edge Reasoning LLM that uses extensive internal 'thinking' to achieve state-of-the-art performance in complex logic, science, and agent orchestration. - [o3](https://docs.siray.ai/model-apis/openai/o3.md): OpenAI's o3 is a powerful reasoning-centric AI model built for deep analysis, complex decision making, and high-accuracy enterprise workloads. - [Sora 2 i2v](https://docs.siray.ai/model-apis/openai/sora-2-i2v.md): OpenAI's Sora 2 i2v is an Image-to-Video model that animates stills into vertical 720x1280 clips with synchronized audio, ideal for fast, social-first video production. 
- [Sora 2 t2v](https://docs.siray.ai/model-apis/openai/sora-2-t2v.md): OpenAI's Sora 2 t2v is a Text-to-Video model that quickly generates vertical 720x1280 clips with synchronized audio, optimized for social media and rapid prototyping. - [Text Embedding 3 Large](https://docs.siray.ai/model-apis/openai/text-embedding-3-large.md): OpenAI Text Embedding 3 Large is a high-quality embedding model offering deeper semantic representation for complex text understanding and enterprise-grade retrieval tasks. - [Text Embedding 3 Small](https://docs.siray.ai/model-apis/openai/text-embedding-3-small.md): OpenAI Text Embedding 3 Small is a lightweight, cost-efficient embedding model designed for fast semantic understanding, similarity search, and text clustering at scale. - [PixVerse C1 i2v](https://docs.siray.ai/model-apis/pixverse/pixverse-c1-i2v.md): PixVerse C1 i2v is PixVerse's character-focused image-to-video model. Animate character images into dynamic videos while maintaining consistent identity and natural movement. - [PixVerse C1 t2v](https://docs.siray.ai/model-apis/pixverse/pixverse-c1-t2v.md): PixVerse C1 t2v is PixVerse's character-focused text-to-video model. Generate character-consistent videos from text prompts with enhanced identity preservation and expressive motion. - [PixVerse V5.6 i2v](https://docs.siray.ai/model-apis/pixverse/pixverse-v5.6-i2v.md): PixVerse V5.6 i2v is PixVerse's latest image-to-video model with enhanced motion and quality. Transform static images into dynamic videos with smooth animation and visual fidelity. - [PixVerse V5.6 t2v](https://docs.siray.ai/model-apis/pixverse/pixverse-v5.6-t2v.md): PixVerse V5.6 t2v is PixVerse's latest text-to-video model with improved generation quality. Create stunning videos from text prompts with enhanced motion and visual coherence. 
- [PixVerse V6 i2v](https://docs.siray.ai/model-apis/pixverse/pixverse-v6-i2v.md): PixVerse V6 i2v is PixVerse's latest image-to-video model with enhanced motion and quality. Transform static images into dynamic videos with smooth animation and visual fidelity. - [PixVerse V6 t2v](https://docs.siray.ai/model-apis/pixverse/pixverse-v6-t2v.md): PixVerse V6 t2v is PixVerse's latest text-to-video model with improved generation quality. Create stunning videos from text prompts with enhanced motion and visual coherence. - [Agnes 1.5 Pro](https://docs.siray.ai/model-apis/sapiens-ai/agnes-1.5-pro.md): Agnes-1.5-Pro is a large-scale text foundation model built with tens of billions of parameters, delivering strong capabilities in natural language understanding and generation. It demonstrates excellent performance across complex semantic modeling, multi-turn dialogue, and reasoning tasks. Through c… - [Hunyuan Image 3 Instruct i2i](https://docs.siray.ai/model-apis/tencent/hunyuan-image-3-instruct-i2i.md): Hunyuan Image 3 Instruct i2i is Tencent's instruction-tuned image-to-image model for precise visual transformations. Edit and restyle existing images with natural language control. - [Hunyuan Image 3 Instruct t2i](https://docs.siray.ai/model-apis/tencent/hunyuan-image-3-instruct-t2i.md): Hunyuan Image 3 Instruct t2i is Tencent's advanced text-to-image model with strong instruction-following capabilities. Generate highly detailed visuals from natural language prompts with ease. - [Hunyuan3d V2.5 Rapid image-to-3d](https://docs.siray.ai/model-apis/tencent/hunyuan3d-v2.5-rapid-image-to-3d.md): Hunyuan3d V2.5 Rapid is Tencent's fast image-to-3D model for quick 3D asset generation. Transform 2D images into 3D models rapidly with optimized speed and solid quality output. - [Hunyuan3d V2.5 Rapid text-to-3d](https://docs.siray.ai/model-apis/tencent/hunyuan3d-v2.5-rapid-text-to-3d.md): Hunyuan3d V2.5 Rapid text-to-3d is Tencent's fast text-to-3D model. 
Generate 3D models directly from text descriptions with rapid processing and reliable quality output. - [Tripo3D V2.5 image-to-3d](https://docs.siray.ai/model-apis/tripo/tripo3d-v2.5-image-to-3d.md): Tripo3D V2.5 image-to-3d is Tripo AI's advanced model for converting 2D images into 3D models. Generate high-quality 3D assets from single images with impressive detail and accuracy. - [Vidu Q2-pro-fast i2v](https://docs.siray.ai/model-apis/vidu/vidu-q2-pro-fast-i2v.md): Vidu Q2-pro-fast i2v is Vidu AI's fast image-to-video model. Generate consistent videos using reference images to maintain subject identity and style throughout. - [Vidu Q2-pro i2v](https://docs.siray.ai/model-apis/vidu/vidu-q2-pro-i2v.md): Vidu Q2-pro i2v is Vidu AI's professional image-to-video model. Transform static images into high-quality dynamic videos with exceptional motion and visual fidelity. - [Vidu Q2-pro se2v](https://docs.siray.ai/model-apis/vidu/vidu-q2-pro-se2v.md): Vidu Q2-pro se2v is Vidu AI's professional start-end-to-video model. Generate high-quality video transitions between two keyframes with superior motion and detail. - [Vidu Q2 t2v](https://docs.siray.ai/model-apis/vidu/vidu-q2-t2v.md): Vidu Q2 t2v is Vidu AI's standard text-to-video model. Generate quality videos directly from text prompts with reliable performance and balanced cost efficiency. - [Vidu Q3-pro i2v](https://docs.siray.ai/model-apis/vidu/vidu-q3-pro-i2v.md): Vidu Q3-pro i2v is Vidu AI's latest image-to-video model with enhanced quality and motion fidelity. Transform static images into dynamic, cinematic video sequences. - [Vidu Q3-pro t2v](https://docs.siray.ai/model-apis/vidu/vidu-q3-pro-t2v.md): Vidu Q3-pro t2v is Vidu AI's latest professional text-to-video model. Generate high-quality videos directly from text prompts with superior visual coherence and motion. 
- [Grok 4](https://docs.siray.ai/model-apis/x-ai/grok-4.md): xAI's Grok 4 is their most advanced LLM, utilizing axiom-based first-principles reasoning for superior performance in complex logic, advanced coding, and multimodal tasks. - [Grok 4 Fast Non Reasoning](https://docs.siray.ai/model-apis/x-ai/grok-4-fast-non-reasoning.md): xAI's Grok 4 Fast Non-Reasoning is a cost-efficient LLM with a 2M context, optimized for high-speed, low-latency, straightforward queries, and rapid search synthesis. - [Grok 4 Fast Reasoning](https://docs.siray.ai/model-apis/x-ai/grok-4-fast-reasoning.md): xAI's Grok 4 Fast Reasoning is a cost-efficient LLM with a 2M context, offering deep, step-by-step reasoning and native tool use for complex problem-solving. - [Grok 4.1 Fast Non Reasoning](https://docs.siray.ai/model-apis/x-ai/grok-4.1-fast-non-reasoning.md): xAI's speed demon variant, delivering instant responses for casual chat, content creation, and simple queries without visible reasoning steps. - [Grok 4.1 Fast Reasoning](https://docs.siray.ai/model-apis/x-ai/grok-4.1-fast-reasoning.md): xAI's cutting-edge AI model launched in 2025, optimized for lightning-fast logical analysis, multi-step problem solving, and advanced reasoning tasks with top-tier accuracy. - [Grok Imagine Image i2i](https://docs.siray.ai/model-apis/x-ai/grok-imagine-image-i2i.md): Grok Imagine Image i2i is xAI's image-to-image model for creative image transformation. Edit and restyle images with Grok's distinctive AI capabilities and unique visual approach. - [Grok Imagine Image t2i](https://docs.siray.ai/model-apis/x-ai/grok-imagine-image-t2i.md): Grok Imagine Image t2i is xAI's text-to-image model for creative image generation. Create unique visuals from text prompts with Grok's distinctive AI style and artistic flair. 
- [Grok Imagine Video Extension](https://docs.siray.ai/model-apis/x-ai/grok-imagine-video-extension.md): Grok Imagine Video Extension is xAI's video extension model for seamlessly extending existing videos. Expand video duration with coherent continuation powered by Grok AI technology. - [Grok Imagine Video i2v](https://docs.siray.ai/model-apis/x-ai/grok-imagine-video-i2v.md): Grok Imagine Video i2v is xAI's image-to-video model for dynamic video generation. Transform static images into engaging video sequences with Grok's creative AI capabilities. - [Grok Imagine Video t2v](https://docs.siray.ai/model-apis/x-ai/grok-imagine-video-t2v.md): Grok Imagine Video t2v is xAI's text-to-video model for creative video generation. Create dynamic videos from text prompts with Grok's unique AI personality and visual style. - [MiMo V2 Flash](https://docs.siray.ai/model-apis/xiaomi/mimo-v2-flash.md): MiMo V2 Flash is a high-speed multimodal AI model built for rapid reasoning across text and visuals, delivering low latency and efficient inference for real-time applications. - [GLM 4.6V Flash](https://docs.siray.ai/model-apis/z-ai/glm-4.6v-flash.md): GLM 4.6V Flash is a high-speed multimodal model designed for instant reasoning, image understanding, and lightweight workloads. It delivers fast responses while keeping accuracy and context handling reliable. - [GLM 4.7](https://docs.siray.ai/model-apis/z-ai/glm-4.7.md): GLM 4.7 is a powerful multimodal large language model designed for fast reasoning, high-quality generation, and reliable enterprise-level AI applications. - [GLM 5](https://docs.siray.ai/model-apis/z-ai/glm-5.md): GLM 5 is Zhipu AI's flagship large language model with advanced reasoning and multilingual capabilities. Excel at complex tasks with state-of-the-art Chinese and English understanding. 
- [Quickstart](https://docs.siray.ai/quickstart.md): Siray is the first enterprise-grade model library covering the full spectrum of AI models (LLMs, Vision, OCR, Video Generation etc.). - [Automatic Top-up](https://docs.siray.ai/resources/automatic-top-up.md): Enabling Auto Top-Up ensures your service never stops by automatically adding funds when your account balance falls below a threshold you set. - [FAQs](https://docs.siray.ai/resources/faq.md): Before contacting support, please check our Frequently Asked Questions (FAQs) below to quickly find the answers you need. - [Payment Methods](https://docs.siray.ai/resources/payment-methods.md): Siray uses Stripe for all payment processing. ## OpenAPI Specs - [veo-3.1-lite-t2v](https://docs.siray.ai/api-reference/openapi-spec/veo-3.1-lite-t2v.json) - [veo-3.1-lite-i2v](https://docs.siray.ai/api-reference/openapi-spec/veo-3.1-lite-i2v.json) - [seedance-2.0-t2v](https://docs.siray.ai/api-reference/openapi-spec/seedance-2.0-t2v.json) - [seedance-2.0-i2v](https://docs.siray.ai/api-reference/openapi-spec/seedance-2.0-i2v.json) - [seedance-2.0-fast-t2v](https://docs.siray.ai/api-reference/openapi-spec/seedance-2.0-fast-t2v.json) - [seedance-2.0-fast-i2v](https://docs.siray.ai/api-reference/openapi-spec/seedance-2.0-fast-i2v.json) - [pixverse-c1-t2v](https://docs.siray.ai/api-reference/openapi-spec/pixverse-c1-t2v.json) - [pixverse-c1-i2v](https://docs.siray.ai/api-reference/openapi-spec/pixverse-c1-i2v.json) - [nano-banana-pro-t2i](https://docs.siray.ai/api-reference/openapi-spec/nano-banana-pro-t2i.json) - [nano-banana-pro-i2i](https://docs.siray.ai/api-reference/openapi-spec/nano-banana-pro-i2i.json) - [nano-banana-2-t2i](https://docs.siray.ai/api-reference/openapi-spec/nano-banana-2-t2i.json) - [nano-banana-2-i2i](https://docs.siray.ai/api-reference/openapi-spec/nano-banana-2-i2i.json) - [nano-banana-i2i](https://docs.siray.ai/api-reference/openapi-spec/nano-banana-i2i.json) - 
[nano-banana-t2i](https://docs.siray.ai/api-reference/openapi-spec/nano-banana-t2i.json) - [midjourney-v7-t2i](https://docs.siray.ai/api-reference/openapi-spec/midjourney-v7-t2i.json) - [midjourney-v6.1-t2i](https://docs.siray.ai/api-reference/openapi-spec/midjourney-v6.1-t2i.json) - [midjourney-v6-t2i](https://docs.siray.ai/api-reference/openapi-spec/midjourney-v6-t2i.json) - [midjourney-niji-7-t2i](https://docs.siray.ai/api-reference/openapi-spec/midjourney-niji-7-t2i.json) - [midjourney-niji-6-t2i](https://docs.siray.ai/api-reference/openapi-spec/midjourney-niji-6-t2i.json) - [grok-imagine-video-extension](https://docs.siray.ai/api-reference/openapi-spec/grok-imagine-video-extension.json) - [pixverse-v5.6-t2v](https://docs.siray.ai/api-reference/openapi-spec/pixverse-v5.6-t2v.json) - [pixverse-v5.6-i2v](https://docs.siray.ai/api-reference/openapi-spec/pixverse-v5.6-i2v.json) - [pixverse-v6-t2v](https://docs.siray.ai/api-reference/openapi-spec/pixverse-v6-t2v.json) - [pixverse-v6-i2v](https://docs.siray.ai/api-reference/openapi-spec/pixverse-v6-i2v.json) - [claude-sonnet-4.6](https://docs.siray.ai/api-reference/openapi-spec/claude-sonnet-4.6.json) - [claude-sonnet-4.5](https://docs.siray.ai/api-reference/openapi-spec/claude-sonnet-4.5.json) - [claude-opus-4.6](https://docs.siray.ai/api-reference/openapi-spec/claude-opus-4.6.json) - [agnes-1.5-pro](https://docs.siray.ai/api-reference/openapi-spec/agnes-1.5-pro.json) - [vidu-q3-pro-i2v](https://docs.siray.ai/api-reference/openapi-spec/vidu-q3-pro-i2v.json) - [grok-imagine-video-t2v](https://docs.siray.ai/api-reference/openapi-spec/grok-imagine-video-t2v.json) - [grok-imagine-video-i2v](https://docs.siray.ai/api-reference/openapi-spec/grok-imagine-video-i2v.json) - [grok-imagine-image-t2i](https://docs.siray.ai/api-reference/openapi-spec/grok-imagine-image-t2i.json) - [grok-imagine-image-i2i](https://docs.siray.ai/api-reference/openapi-spec/grok-imagine-image-i2i.json) - 
[gpt-5.4](https://docs.siray.ai/api-reference/openapi-spec/gpt-5.4.json) - [kling-3-motion-control](https://docs.siray.ai/api-reference/openapi-spec/kling-3-motion-control.json) - [flux-1.1-pro-t2i-test](https://docs.siray.ai/api-reference/openapi-spec/flux-1.1-pro-t2i-test.json) - [gemini-3.1-pro-preview-test](https://docs.siray.ai/api-reference/openapi-spec/gemini-3.1-pro-preview-test.json) - [flux-1.1-pro-ultra-t2i](https://docs.siray.ai/api-reference/openapi-spec/flux-1.1-pro-ultra-t2i.json) - [flux-1.1-pro-ultra-i2i](https://docs.siray.ai/api-reference/openapi-spec/flux-1.1-pro-ultra-i2i.json) - [flux-1.1-pro-t2i](https://docs.siray.ai/api-reference/openapi-spec/flux-1.1-pro-t2i.json) - [flux-1.1-pro-i2i](https://docs.siray.ai/api-reference/openapi-spec/flux-1.1-pro-i2i.json) - [hunyuan3d-v2.5-rapid-text-to-3d](https://docs.siray.ai/api-reference/openapi-spec/hunyuan3d-v2.5-rapid-text-to-3d.json) - [hunyuan3d-v2.5-rapid-image-to-3d](https://docs.siray.ai/api-reference/openapi-spec/hunyuan3d-v2.5-rapid-image-to-3d.json) - [tripo3d-v2.5-image-to-3d](https://docs.siray.ai/api-reference/openapi-spec/tripo3d-v2.5-image-to-3d.json) - [task-status](https://docs.siray.ai/api-reference/openapi-spec/task-status.json) - [kling-3.0-t2v](https://docs.siray.ai/api-reference/openapi-spec/kling-3.0-t2v.json) - [kling-3.0-i2v](https://docs.siray.ai/api-reference/openapi-spec/kling-3.0-i2v.json) - [gemini-3.1-flash-image-preview](https://docs.siray.ai/api-reference/openapi-spec/gemini-3.1-flash-image-preview.json) - [qwen-3.5-plus](https://docs.siray.ai/api-reference/openapi-spec/qwen-3.5-plus.json) - [gemini-3.1-pro-preview](https://docs.siray.ai/api-reference/openapi-spec/gemini-3.1-pro-preview.json) - [claude-sonnet-4.6-long](https://docs.siray.ai/api-reference/openapi-spec/claude-sonnet-4.6-long.json) - [claude-sonnet-4.5-long](https://docs.siray.ai/api-reference/openapi-spec/claude-sonnet-4.5-long.json) - 
- [claude-sonnet-4-5-20250929](https://docs.siray.ai/api-reference/openapi-spec/claude-sonnet-4-5-20250929.json)
- [gpt-5.3-codex](https://docs.siray.ai/api-reference/openapi-spec/gpt-5.3-codex.json)
- [gpt-5.2-codex](https://docs.siray.ai/api-reference/openapi-spec/gpt-5.2-codex.json)
- [minimax-m2.5](https://docs.siray.ai/api-reference/openapi-spec/minimax-m2.5.json)
- [glm-5](https://docs.siray.ai/api-reference/openapi-spec/glm-5.json)
- [kling-3.0-omni-i2v](https://docs.siray.ai/api-reference/openapi-spec/kling-3.0-omni-i2v.json)
- [claude-opus-4.6-long](https://docs.siray.ai/api-reference/openapi-spec/claude-opus-4.6-long.json)
- [kling-2.6-motion-control](https://docs.siray.ai/api-reference/openapi-spec/kling-2.6-motion-control.json)
- [kling-2.6-pro-motion-control](https://docs.siray.ai/api-reference/openapi-spec/kling-2.6-pro-motion-control.json)
- [seedance-1.0-pro-fast-t2v](https://docs.siray.ai/api-reference/openapi-spec/seedance-1.0-pro-fast-t2v.json)
- [seedance-1.5-pro-i2v](https://docs.siray.ai/api-reference/openapi-spec/seedance-1.5-pro-i2v.json)
- [seedance-1.0-pro-t2v](https://docs.siray.ai/api-reference/openapi-spec/seedance-1.0-pro-t2v.json)
- [seedance-1.0-pro-i2v](https://docs.siray.ai/api-reference/openapi-spec/seedance-1.0-pro-i2v.json)
- [seedance-1.5-pro-se2v](https://docs.siray.ai/api-reference/openapi-spec/seedance-1.5-pro-se2v.json)
- [seedance-1.5-pro-t2v](https://docs.siray.ai/api-reference/openapi-spec/seedance-1.5-pro-t2v.json)
- [seedance-1.0-lite-t2v](https://docs.siray.ai/api-reference/openapi-spec/seedance-1.0-lite-t2v.json)
- [seedance-1.0-lite-i2v](https://docs.siray.ai/api-reference/openapi-spec/seedance-1.0-lite-i2v.json)
- [sora-2-pro-t2v](https://docs.siray.ai/api-reference/openapi-spec/sora-2-pro-t2v.json)
- [sora-2-t2v](https://docs.siray.ai/api-reference/openapi-spec/sora-2-t2v.json)
- [sora-2-i2v](https://docs.siray.ai/api-reference/openapi-spec/sora-2-i2v.json)
- [wan-2.6-t2v](https://docs.siray.ai/api-reference/openapi-spec/wan-2.6-t2v.json)
- [wan-2.6-i2v](https://docs.siray.ai/api-reference/openapi-spec/wan-2.6-i2v.json)
- [vidu-q2-pro-se2v](https://docs.siray.ai/api-reference/openapi-spec/vidu-q2-pro-se2v.json)
- [vidu-q2-pro-fast-se2v](https://docs.siray.ai/api-reference/openapi-spec/vidu-q2-pro-fast-se2v.json)
- [vidu-q2-t2v](https://docs.siray.ai/api-reference/openapi-spec/vidu-q2-t2v.json)
- [vidu-q2-pro-i2v](https://docs.siray.ai/api-reference/openapi-spec/vidu-q2-pro-i2v.json)
- [vidu-q3-pro-t2v](https://docs.siray.ai/api-reference/openapi-spec/vidu-q3-pro-t2v.json)
- [vidu-q2-pro-fast-i2v](https://docs.siray.ai/api-reference/openapi-spec/vidu-q2-pro-fast-i2v.json)
- [vidu-q2-pro-fast-ref2v](https://docs.siray.ai/api-reference/openapi-spec/vidu-q2-pro-fast-ref2v.json)
- [hunyuan-image-3-instruct-i2i](https://docs.siray.ai/api-reference/openapi-spec/hunyuan-image-3-instruct-i2i.json)
- [hunyuan-image-3-instruct-t2i](https://docs.siray.ai/api-reference/openapi-spec/hunyuan-image-3-instruct-t2i.json)
- [z-image-turbo-t2i](https://docs.siray.ai/api-reference/openapi-spec/z-image-turbo-t2i.json)
- [wan-2.6-t2v-720p](https://docs.siray.ai/api-reference/openapi-spec/wan-2.6-t2v-720p.json)
- [wan-2.6-t2v-1080p](https://docs.siray.ai/api-reference/openapi-spec/wan-2.6-t2v-1080p.json)
- [wan-2.6-i2v-1080p](https://docs.siray.ai/api-reference/openapi-spec/wan-2.6-i2v-1080p.json)
- [vidu-q2-turbo-i2v-1080p](https://docs.siray.ai/api-reference/openapi-spec/vidu-q2-turbo-i2v-1080p.json)
- [vidu-q2-t2v-720p](https://docs.siray.ai/api-reference/openapi-spec/vidu-q2-t2v-720p.json)
- [vidu-q2-t2v-1080p](https://docs.siray.ai/api-reference/openapi-spec/vidu-q2-t2v-1080p.json)
- [vidu-q2-ref2v-1080p](https://docs.siray.ai/api-reference/openapi-spec/vidu-q2-ref2v-1080p.json)
- [vidu-q2-pro-se2v-720p](https://docs.siray.ai/api-reference/openapi-spec/vidu-q2-pro-se2v-720p.json)
- [vidu-q2-pro-i2v-720p](https://docs.siray.ai/api-reference/openapi-spec/vidu-q2-pro-i2v-720p.json)
- [vidu-1.5-t2v-720p](https://docs.siray.ai/api-reference/openapi-spec/vidu-1.5-t2v-720p.json)
- [vidu-1.5-t2v-1080p](https://docs.siray.ai/api-reference/openapi-spec/vidu-1.5-t2v-1080p.json)
- [vidu-1.5-se2v-720p](https://docs.siray.ai/api-reference/openapi-spec/vidu-1.5-se2v-720p.json)
- [vidu-1.5-se2v-1080p](https://docs.siray.ai/api-reference/openapi-spec/vidu-1.5-se2v-1080p.json)
- [vidu-1.5-ref2v-720p](https://docs.siray.ai/api-reference/openapi-spec/vidu-1.5-ref2v-720p.json)
- [vidu-1.5-ref2v-1080p](https://docs.siray.ai/api-reference/openapi-spec/vidu-1.5-ref2v-1080p.json)
- [vidu-1.5-i2v-720p](https://docs.siray.ai/api-reference/openapi-spec/vidu-1.5-i2v-720p.json)
- [vidu-1.5-i2v-1080p](https://docs.siray.ai/api-reference/openapi-spec/vidu-1.5-i2v-1080p.json)
- [veo-3.1-t2v](https://docs.siray.ai/api-reference/openapi-spec/veo-3.1-t2v.json)
- [veo-3.1-i2v](https://docs.siray.ai/api-reference/openapi-spec/veo-3.1-i2v.json)
- [veo-3-fast-t2v](https://docs.siray.ai/api-reference/openapi-spec/veo-3-fast-t2v.json)
- [veo-3-fast-i2v](https://docs.siray.ai/api-reference/openapi-spec/veo-3-fast-i2v.json)
- [text-embedding-3-small](https://docs.siray.ai/api-reference/openapi-spec/text-embedding-3-small.json)
- [text-embedding-3-large](https://docs.siray.ai/api-reference/openapi-spec/text-embedding-3-large.json)
- [sora-2-pro-t2v-std](https://docs.siray.ai/api-reference/openapi-spec/sora-2-pro-t2v-std.json)
- [sora-2-pro-i2v-std](https://docs.siray.ai/api-reference/openapi-spec/sora-2-pro-i2v-std.json)
- [sora-2-pro-i2v-hd](https://docs.siray.ai/api-reference/openapi-spec/sora-2-pro-i2v-hd.json)
- [seedream-4.5-t2i](https://docs.siray.ai/api-reference/openapi-spec/seedream-4.5-t2i.json)
- [seedream-4.5-ref2i](https://docs.siray.ai/api-reference/openapi-spec/seedream-4.5-ref2i.json)
- [seedream-4.5-i2i](https://docs.siray.ai/api-reference/openapi-spec/seedream-4.5-i2i.json)
- [seedream-4.0-t2i](https://docs.siray.ai/api-reference/openapi-spec/seedream-4.0-t2i.json)
- [seedream-4.0-ref2i](https://docs.siray.ai/api-reference/openapi-spec/seedream-4.0-ref2i.json)
- [seedream-4.0-i2i](https://docs.siray.ai/api-reference/openapi-spec/seedream-4.0-i2i.json)
- [seedance-1.5-pro-t2v-720p](https://docs.siray.ai/api-reference/openapi-spec/seedance-1.5-pro-t2v-720p.json)
- [seedance-1.5-pro-se2v-720p](https://docs.siray.ai/api-reference/openapi-spec/seedance-1.5-pro-se2v-720p.json)
- [seedance-1.5-pro-i2v-720p](https://docs.siray.ai/api-reference/openapi-spec/seedance-1.5-pro-i2v-720p.json)
- [seedance-1.0-pro-t2v-720p](https://docs.siray.ai/api-reference/openapi-spec/seedance-1.0-pro-t2v-720p.json)
- [seedance-1.0-pro-t2v-1080p](https://docs.siray.ai/api-reference/openapi-spec/seedance-1.0-pro-t2v-1080p.json)
- [seedance-1.0-pro-i2v-720p](https://docs.siray.ai/api-reference/openapi-spec/seedance-1.0-pro-i2v-720p.json)
- [seedance-1.0-pro-i2v-1080p](https://docs.siray.ai/api-reference/openapi-spec/seedance-1.0-pro-i2v-1080p.json)
- [seedance-1.0-pro-fast-t2v-720p](https://docs.siray.ai/api-reference/openapi-spec/seedance-1.0-pro-fast-t2v-720p.json)
- [seedance-1.0-pro-fast-t2v-1080p](https://docs.siray.ai/api-reference/openapi-spec/seedance-1.0-pro-fast-t2v-1080p.json)
- [seedance-1.0-lite-t2v-720p](https://docs.siray.ai/api-reference/openapi-spec/seedance-1.0-lite-t2v-720p.json)
- [seedance-1.0-lite-i2v-720p](https://docs.siray.ai/api-reference/openapi-spec/seedance-1.0-lite-i2v-720p.json)
- [seed-1.6](https://docs.siray.ai/api-reference/openapi-spec/seed-1.6.json)
- [seed-1.6-vision-128k](https://docs.siray.ai/api-reference/openapi-spec/seed-1.6-vision-128k.json)
- [qwen3-omni-flash](https://docs.siray.ai/api-reference/openapi-spec/qwen3-omni-flash.json)
- [qwen3-max-256k](https://docs.siray.ai/api-reference/openapi-spec/qwen3-max-256k.json)
- [qwen3-coder-480b-a35b-instruct](https://docs.siray.ai/api-reference/openapi-spec/qwen3-coder-480b-a35b-instruct.json)
- [qwen-plus](https://docs.siray.ai/api-reference/openapi-spec/qwen-plus.json)
- [qwen-long](https://docs.siray.ai/api-reference/openapi-spec/qwen-long.json)
- [o3](https://docs.siray.ai/api-reference/openapi-spec/o3.json)
- [o1](https://docs.siray.ai/api-reference/openapi-spec/o1.json)
- [mimo-v2-flash](https://docs.siray.ai/api-reference/openapi-spec/mimo-v2-flash.json)
- [kling-2.6-pro-t2v](https://docs.siray.ai/api-reference/openapi-spec/kling-2.6-pro-t2v.json)
- [kling-2.6-pro-i2v](https://docs.siray.ai/api-reference/openapi-spec/kling-2.6-pro-i2v.json)
- [kling-2.1-standard-i2v](https://docs.siray.ai/api-reference/openapi-spec/kling-2.1-standard-i2v.json)
- [kling-2.1-master-i2v](https://docs.siray.ai/api-reference/openapi-spec/kling-2.1-master-i2v.json)
- [kling-1.6-standard-t2v](https://docs.siray.ai/api-reference/openapi-spec/kling-1.6-standard-t2v.json)
- [kling-1.6-standard-i2v](https://docs.siray.ai/api-reference/openapi-spec/kling-1.6-standard-i2v.json)
- [kling-1.6-pro-i2v](https://docs.siray.ai/api-reference/openapi-spec/kling-1.6-pro-i2v.json)
- [kimi-k2.5](https://docs.siray.ai/api-reference/openapi-spec/kimi-k2.5.json)
- [grok-4](https://docs.siray.ai/api-reference/openapi-spec/grok-4.json)
- [grok-4.1-fast-reasoning](https://docs.siray.ai/api-reference/openapi-spec/grok-4.1-fast-reasoning.json)
- [grok-4.1-fast-non-reasoning](https://docs.siray.ai/api-reference/openapi-spec/grok-4.1-fast-non-reasoning.json)
- [grok-4-fast-reasoning](https://docs.siray.ai/api-reference/openapi-spec/grok-4-fast-reasoning.json)
- [grok-4-fast-non-reasoning](https://docs.siray.ai/api-reference/openapi-spec/grok-4-fast-non-reasoning.json)
- [gpt-oss-20b](https://docs.siray.ai/api-reference/openapi-spec/gpt-oss-20b.json)
- [gpt-oss-120b](https://docs.siray.ai/api-reference/openapi-spec/gpt-oss-120b.json)
- [gpt-5](https://docs.siray.ai/api-reference/openapi-spec/gpt-5.json)
- [gpt-5.2](https://docs.siray.ai/api-reference/openapi-spec/gpt-5.2.json)
- [gpt-5.2-chat](https://docs.siray.ai/api-reference/openapi-spec/gpt-5.2-chat.json)
- [gpt-5.1](https://docs.siray.ai/api-reference/openapi-spec/gpt-5.1.json)
- [gpt-5.1-codex](https://docs.siray.ai/api-reference/openapi-spec/gpt-5.1-codex.json)
- [gpt-5.1-codex-mini](https://docs.siray.ai/api-reference/openapi-spec/gpt-5.1-codex-mini.json)
- [gpt-5.1-chat](https://docs.siray.ai/api-reference/openapi-spec/gpt-5.1-chat.json)
- [gpt-5-nano](https://docs.siray.ai/api-reference/openapi-spec/gpt-5-nano.json)
- [gpt-5-codex](https://docs.siray.ai/api-reference/openapi-spec/gpt-5-codex.json)
- [gpt-5-chat](https://docs.siray.ai/api-reference/openapi-spec/gpt-5-chat.json)
- [gpt-4o](https://docs.siray.ai/api-reference/openapi-spec/gpt-4o.json)
- [gpt-4o-mini](https://docs.siray.ai/api-reference/openapi-spec/gpt-4o-mini.json)
- [gpt-4.1](https://docs.siray.ai/api-reference/openapi-spec/gpt-4.1.json)
- [gpt-4.1-nano](https://docs.siray.ai/api-reference/openapi-spec/gpt-4.1-nano.json)
- [gpt-4.1-mini](https://docs.siray.ai/api-reference/openapi-spec/gpt-4.1-mini.json)
- [glm-4.7](https://docs.siray.ai/api-reference/openapi-spec/glm-4.7.json)
- [glm-4.6v-flash](https://docs.siray.ai/api-reference/openapi-spec/glm-4.6v-flash.json)
- [gemini-embedding-001](https://docs.siray.ai/api-reference/openapi-spec/gemini-embedding-001.json)
- [gemini-3-pro-preview](https://docs.siray.ai/api-reference/openapi-spec/gemini-3-pro-preview.json)
- [gemini-3-pro-image-preview](https://docs.siray.ai/api-reference/openapi-spec/gemini-3-pro-image-preview.json)
- [gemini-3-flash-preview](https://docs.siray.ai/api-reference/openapi-spec/gemini-3-flash-preview.json)
- [gemini-2.5-pro](https://docs.siray.ai/api-reference/openapi-spec/gemini-2.5-pro.json)
- [gemini-2.5-flash](https://docs.siray.ai/api-reference/openapi-spec/gemini-2.5-flash.json)
- [gemini-2.5-flash-lite](https://docs.siray.ai/api-reference/openapi-spec/gemini-2.5-flash-lite.json)
- [gemini-2.5-flash-image-i2i](https://docs.siray.ai/api-reference/openapi-spec/gemini-2.5-flash-image-i2i.json)
- [gemini-2.0-flash](https://docs.siray.ai/api-reference/openapi-spec/gemini-2.0-flash.json)
- [flux-kontext-t2i-pro](https://docs.siray.ai/api-reference/openapi-spec/flux-kontext-t2i-pro.json)
- [flux-kontext-t2i-max](https://docs.siray.ai/api-reference/openapi-spec/flux-kontext-t2i-max.json)
- [flux-kontext-i2i-pro](https://docs.siray.ai/api-reference/openapi-spec/flux-kontext-i2i-pro.json)
- [flux-kontext-i2i-max](https://docs.siray.ai/api-reference/openapi-spec/flux-kontext-i2i-max.json)
- [deepseek-v3.2](https://docs.siray.ai/api-reference/openapi-spec/deepseek-v3.2.json)
- [deepseek-v3.2-exp](https://docs.siray.ai/api-reference/openapi-spec/deepseek-v3.2-exp.json)
- [deepseek-v3.1](https://docs.siray.ai/api-reference/openapi-spec/deepseek-v3.1.json)
- [deepseek-v3.1-terminus](https://docs.siray.ai/api-reference/openapi-spec/deepseek-v3.1-terminus.json)
- [deepseek-r1](https://docs.siray.ai/api-reference/openapi-spec/deepseek-r1.json)
- [claude-opus-4.5](https://docs.siray.ai/api-reference/openapi-spec/claude-opus-4.5.json)
- [claude-opus-4.5-thinking](https://docs.siray.ai/api-reference/openapi-spec/claude-opus-4.5-thinking.json)
- [claude-opus-4.1-thinking](https://docs.siray.ai/api-reference/openapi-spec/claude-opus-4.1-thinking.json)
- [claude-haiku-4.5](https://docs.siray.ai/api-reference/openapi-spec/claude-haiku-4.5.json)
- [claude-haiku-4-5-20251001](https://docs.siray.ai/api-reference/openapi-spec/claude-haiku-4-5-20251001.json)
- [sora-2-t2v-720x1280](https://docs.siray.ai/api-reference/openapi-spec/sora-2-t2v-720x1280.json)
- [sora-2-pro-t2v-720x1280](https://docs.siray.ai/api-reference/openapi-spec/sora-2-pro-t2v-720x1280.json)
- [sora-2-pro-t2v-1024x1792](https://docs.siray.ai/api-reference/openapi-spec/sora-2-pro-t2v-1024x1792.json)
- [sora-2-pro-i2v-720x1280](https://docs.siray.ai/api-reference/openapi-spec/sora-2-pro-i2v-720x1280.json)
- [sora-2-pro-i2v-1024x1792](https://docs.siray.ai/api-reference/openapi-spec/sora-2-pro-i2v-1024x1792.json)
- [sora-2-i2v-720x1280](https://docs.siray.ai/api-reference/openapi-spec/sora-2-i2v-720x1280.json)
- [seedance-1.5-pro-t2v-480p](https://docs.siray.ai/api-reference/openapi-spec/seedance-1.5-pro-t2v-480p.json)
- [seedance-1.5-pro-se2v-480p](https://docs.siray.ai/api-reference/openapi-spec/seedance-1.5-pro-se2v-480p.json)
- [seedance-1.5-pro-i2v-480p](https://docs.siray.ai/api-reference/openapi-spec/seedance-1.5-pro-i2v-480p.json)
- [gpt-image-1.5-t2i-medium](https://docs.siray.ai/api-reference/openapi-spec/gpt-image-1.5-t2i-medium.json)
- [gpt-image-1.5-t2i-low](https://docs.siray.ai/api-reference/openapi-spec/gpt-image-1.5-t2i-low.json)
- [gpt-image-1.5-t2i-high](https://docs.siray.ai/api-reference/openapi-spec/gpt-image-1.5-t2i-high.json)
- [gpt-image-1.5-ref2i-medium](https://docs.siray.ai/api-reference/openapi-spec/gpt-image-1.5-ref2i-medium.json)
- [gpt-image-1.5-ref2i-low](https://docs.siray.ai/api-reference/openapi-spec/gpt-image-1.5-ref2i-low.json)
- [gpt-image-1.5-ref2i-high](https://docs.siray.ai/api-reference/openapi-spec/gpt-image-1.5-ref2i-high.json)
- [gpt-image-1.5-i2i-medium](https://docs.siray.ai/api-reference/openapi-spec/gpt-image-1.5-i2i-medium.json)
- [gpt-image-1.5-i2i-low](https://docs.siray.ai/api-reference/openapi-spec/gpt-image-1.5-i2i-low.json)
- [gpt-image-1.5-i2i-high](https://docs.siray.ai/api-reference/openapi-spec/gpt-image-1.5-i2i-high.json)
- [gemini-3-pro-image-preview-nano-banana-pro](https://docs.siray.ai/api-reference/openapi-spec/gemini-3-pro-image-preview-nano-banana-pro.json)
- [gemini-2.5-flash-image-t2i](https://docs.siray.ai/api-reference/openapi-spec/gemini-2.5-flash-image-t2i.json)
- [gemini-2.5-flash-image-t2i-nano-banana](https://docs.siray.ai/api-reference/openapi-spec/gemini-2.5-flash-image-t2i-nano-banana.json)
- [gemini-2.5-flash-image-i2i-nano-banana](https://docs.siray.ai/api-reference/openapi-spec/gemini-2.5-flash-image-i2i-nano-banana.json)
- [claude-sonnet-4.5-prompts-200k-tokens](https://docs.siray.ai/api-reference/openapi-spec/claude-sonnet-4.5-prompts-200k-tokens.json)
- [seededit-3.0-i2i](https://docs.siray.ai/api-reference/openapi-spec/seededit-3.0-i2i.json)
- [gpt-o1](https://docs.siray.ai/api-reference/openapi-spec/gpt-o1.json)
- [gpt-o1-mini](https://docs.siray.ai/api-reference/openapi-spec/gpt-o1-mini.json)
- [doubao-seed-1.6](https://docs.siray.ai/api-reference/openapi-spec/doubao-seed-1.6.json)
- [doubao-seed-1.6-vision-32k](https://docs.siray.ai/api-reference/openapi-spec/doubao-seed-1.6-vision-32k.json)
- [doubao-seed-1.6-vision-256k](https://docs.siray.ai/api-reference/openapi-spec/doubao-seed-1.6-vision-256k.json)
- [doubao-seed-1.6-vision-128k](https://docs.siray.ai/api-reference/openapi-spec/doubao-seed-1.6-vision-128k.json)
- [doubao-seed-1.6-thinking](https://docs.siray.ai/api-reference/openapi-spec/doubao-seed-1.6-thinking.json)
- [doubao-seed-1.6-flash](https://docs.siray.ai/api-reference/openapi-spec/doubao-seed-1.6-flash.json)
- [doubao-1.5-thinking-vision-pro](https://docs.siray.ai/api-reference/openapi-spec/doubao-1.5-thinking-vision-pro.json)
- [doubao-1.5-pro-32k](https://docs.siray.ai/api-reference/openapi-spec/doubao-1.5-pro-32k.json)
- [doubao-1.5-pro-256k](https://docs.siray.ai/api-reference/openapi-spec/doubao-1.5-pro-256k.json)
- [doubao-1.5-lite-32k](https://docs.siray.ai/api-reference/openapi-spec/doubao-1.5-lite-32k.json)
- [claude-sonnet-4.5-prompts-200k-tokens-1](https://docs.siray.ai/api-reference/openapi-spec/claude-sonnet-4.5-prompts-200k-tokens-1.json)
- [openapi](https://docs.siray.ai/api-reference/openapi.json)

## Optional

- [Documentation](https://docs.siray.ai/)
- [Dev Support](mailto:support@siray.ai)
- [Discord Community](https://discord.gg/CmSbUzPSVP)