The flagship sparse MoE LLM with an ultra-long 256K context. It is designed for deep analysis and reasoning over massive documents and codebases.

Key features

  • 256K Context: Ideal for tasks requiring full-document understanding and information synthesis across large texts.
  • Deep Analysis: Excels at complex information extraction and query-answering over large, structured datasets.
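To make the 256K figure concrete, here is a minimal sketch of a pre-flight check that estimates whether a full document fits in the context window before sending it for analysis. The helper, constants, and the ~4-characters-per-token heuristic are illustrative assumptions, not part of any official SDK.

```python
# Illustrative sketch: estimate whether a document fits in a 256K-token
# context window. Assumes a rough ~4 characters-per-token average for
# English text; real token counts depend on the model's tokenizer.
CONTEXT_TOKENS = 256 * 1024   # 262,144-token window
CHARS_PER_TOKEN = 4           # rough heuristic, not exact

def fits_in_context(text: str, reserve_for_output: int = 4096) -> bool:
    """Return True if `text` likely fits, leaving room for the model's reply."""
    estimated_tokens = len(text) / CHARS_PER_TOKEN
    return estimated_tokens <= CONTEXT_TOKENS - reserve_for_output

# A ~1M-character document (~250K estimated tokens) still fits in one pass,
# which is what enables full-document understanding without chunking.
doc = "x" * 1_000_000
print(fits_in_context(doc))
```

In practice you would replace the heuristic with the model's actual tokenizer; the point is that documents on the order of a million characters can be analyzed in a single request rather than split into chunks.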