SuperCLUE: Chinese LLM Benchmark

SuperCLUE is a comprehensive benchmark suite designed to evaluate Chinese large language models (LLMs). It builds on the original CLUE (Chinese Language Understanding Evaluation) benchmark, extending its scope from the smaller language-understanding models CLUE targeted to the advanced capabilities of modern LLMs. Developed by the CLUEbenchmark team, a collaborative effort involving researchers from institutions such as Tsinghua University and companies in the Chinese AI ecosystem, SuperCLUE was first introduced in 2023 to address gaps in evaluating Chinese LLMs against international standards.

Key Features

The benchmark is open-source and actively maintained, with evaluations often shared via arXiv papers and GitHub.


