Mark1X Mini 7B Shines

A deep‑dive into how Mark1X Mini 7B measures up against GPT‑4, Claude 4, Gemini 2.5, and Grok 4 across GPQA, SWE‑bench Verified, MMLU, and LiveCodeBench.

📊 LLM Benchmark Results Are In!

We just ran four industry‑standard benchmarks (GPQA Diamond, SWE‑bench Verified, MMLU, and LiveCodeBench) across five leading models: Mark1X Mini 7B, GPT‑4, Claude 4, Gemini 2.5, and Grok 4.
Here’s how they compare, benchmark by benchmark:

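The post doesn’t describe the evaluation harness itself, but the sweep it reports is straightforward to picture: every model is scored on every benchmark. The sketch below is purely illustrative; `score` is a placeholder, and only the model and benchmark names come from the post.

```python
# Hypothetical sketch of the benchmark sweep described above.
# Model and benchmark names are from the post; the scoring
# function is a placeholder, not Anemo AI's actual harness.

BENCHMARKS = ["GPQA Diamond", "SWE-bench Verified", "MMLU", "LiveCodeBench"]
MODELS = ["Mark1X Mini 7B", "GPT-4", "Claude 4", "Gemini 2.5", "Grok 4"]

def score(model: str, benchmark: str) -> float:
    """Placeholder: a real harness would query the model on every
    task in the benchmark and return the fraction solved."""
    return 0.0

def run_sweep() -> dict:
    """Score each (model, benchmark) pair and collect the results."""
    results = {}
    for model in MODELS:
        for bench in BENCHMARKS:
            results[(model, bench)] = score(model, bench)
    return results

table = run_sweep()  # 5 models x 4 benchmarks = 20 entries
```

In a real harness, `score` would be the only part that differs per benchmark (unit tests for SWE‑bench, multiple‑choice accuracy for MMLU and GPQA, pass@1 execution for LiveCodeBench).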

🛠️ About Mark1X Mini 7B

Mark1X Mini 7B is our first proprietary, closed‑source 7‑billion‑parameter model in Anemo AI's Mark series, built on LCM‑MOE (Large Concept Models – Mixture of Experts).
It is currently in final testing and will launch publicly soon.
As the first‑ever LCM‑MOE model in production, it routes each input to experts specialized by concept domain, activating only a fraction of its parameters per token for greater efficiency and accuracy.
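Anemo AI has not published LCM‑MOE internals, so the following is only a generic illustration of the mixture‑of‑experts idea mentioned above: a gating network scores every expert for a token, and only the top‑k experts actually run. All names and shapes here are hypothetical.

```python
import math
import random

# Generic top-k mixture-of-experts routing (illustrative only;
# LCM-MOE's actual gating mechanism is unpublished).

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def route(token_vec, gate_weights, top_k=2):
    """Score each expert for one token and keep the top_k experts.

    gate_weights holds one weight vector per expert; the gate score
    is a dot product with the token representation. Returns
    (expert_index, normalized_weight) pairs, best expert first.
    """
    scores = [sum(w * x for w, x in zip(expert_w, token_vec))
              for expert_w in gate_weights]
    probs = softmax(scores)
    # Only the top_k experts run a forward pass for this token;
    # their gate probabilities are renormalized to sum to 1.
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    chosen = ranked[:top_k]
    total = sum(probs[i] for i in chosen)
    return [(i, probs[i] / total) for i in chosen]

random.seed(0)
token = [random.gauss(0, 1) for _ in range(8)]   # toy token embedding
gates = [[random.gauss(0, 1) for _ in range(8)]  # 4 toy experts
         for _ in range(4)]
print(route(token, gates))
```

The efficiency claim follows from this structure: with 4 experts and top‑2 routing, only half of the expert parameters are active for any given token, even though the full model is much larger.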


Per‑benchmark results (see charts):

- GPQA Diamond
- SWE‑bench Verified
- MMLU (Massive Multitask Language Understanding)
- LiveCodeBench