A cost-efficient LLM with a 2M context window, optimized for high-speed, low-latency handling of straightforward queries and rapid search synthesis.

Key Features
- Max Speed: Delivers the fastest, most direct output by skipping internal reasoning steps.
- Rapid Retrieval: Ideal for quick information extraction, summarization, and search-augmented generation (see the sketch after this list).
- 2M Context: Retains the massive context window for fast processing of very large inputs.
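
For illustration only, the minimal sketch below shows one way such a model might be used for search-augmented generation: retrieved snippets are passed alongside a question, and the model synthesizes a short answer. It assumes an OpenAI-compatible chat-completions endpoint; `BASE_URL`, `API_KEY`, and `MODEL_NAME` are hypothetical placeholders, not details from this page.

```python
# Sketch of a fast, search-augmented summarization call.
# Assumptions (not from the original text): the model is served behind an
# OpenAI-compatible /chat/completions endpoint; BASE_URL, API_KEY, and
# MODEL_NAME are hypothetical placeholders.
import requests

BASE_URL = "https://api.example.com/v1"    # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                   # hypothetical credential
MODEL_NAME = "fast-2m-context-model"       # hypothetical model identifier


def synthesize(search_snippets: list[str], question: str) -> str:
    """Ask the model to synthesize an answer from retrieved snippets."""
    context = "\n\n".join(search_snippets)
    response = requests.post(
        f"{BASE_URL}/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": MODEL_NAME,
            "messages": [
                {"role": "system",
                 "content": "Answer concisely using only the provided snippets."},
                {"role": "user",
                 "content": f"Snippets:\n{context}\n\nQuestion: {question}"},
            ],
            # A large context window lets long snippet lists be passed directly.
            "max_tokens": 256,
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    snippets = ["Snippet 1 ...", "Snippet 2 ..."]
    print(synthesize(snippets, "What do these snippets say?"))
```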
