Qwen2.5 Chat is here

Qwen2.5-Max: Exploring the Intelligence of Large-scale MoE Model

Qwen2.5-Max is a large-scale MoE model pretrained on over 20 trillion tokens and further post-trained with curated Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF).

Qwen2.5-Max is a large MoE LLM pretrained on massive data and post-trained with curated SFT and RLHF recipes. It achieves competitive performance against top-tier models and outperforms DeepSeek V3 on benchmarks such as Arena-Hard, LiveBench, LiveCodeBench, and GPQA-Diamond.

📖 Blog: qwenlm.github.io/blog/qwen2.5-max

💬 Qwen Chat: chat.qwenlm.ai (choose Qwen2.5-Max as the model)

⚙️ API: alibabacloud.com/help/en/model-studio/getti.. (check the code snippet in the blog)

💻 HF Demo: huggingface.co/spaces/Qwen/Qwen2.5-Max-Demo
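The API is served in an OpenAI-compatible format. As a minimal sketch of what a chat-completions request body looks like (the base URL and model name below are assumptions for illustration; use the exact values from the code snippet in the blog and the Alibaba Cloud Model Studio docs):

```python
import json

# Assumed international endpoint for Model Studio's OpenAI-compatible mode;
# verify against the blog's snippet before use.
BASE_URL = "https://dashscope-intl.aliyuncs.com/compatible-mode/v1"

def build_chat_request(prompt: str, model: str = "qwen-max-2025-01-25") -> dict:
    """Build the JSON body for a POST to /chat/completions.

    The model identifier is an assumption; pick the Qwen2.5-Max model
    name listed in Model Studio.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }

# Serialize the body for an HTTP POST (with your API key in the
# Authorization header) to BASE_URL + "/chat/completions".
body = build_chat_request("Which number is larger, 9.11 or 9.8?")
payload = json.dumps(body)
```

The same body works with any OpenAI-compatible client by pointing its base URL at the Model Studio endpoint.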