📊 LLM Benchmark Results Are In!
We just ran four industry‑standard benchmarks—GPQA, SWE‑bench Verified, MMLU, and LiveCodeBench—across five leading models.
Here’s the TL;DR:
🛠️ About Mark1X Mini 7B
Mark1X Mini 7B is the first proprietary 7‑billion‑parameter model in Anemo AI's Mark series, built on LCM‑MOE (Large Concept Models – Mixture of Experts).
It’s currently in its final testing phase and will be publicly launched soon.
As the first LCM‑MOE model in production, it routes inputs to specialized experts across concept domains for greater efficiency and accuracy.
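Mark1X Mini 7B's actual routing mechanism isn't public, but the general idea behind mixture-of-experts routing can be sketched as a learned gate that scores every expert for a given input and forwards it to only the top-k. Everything below (function names, dimensions, the gate itself) is an illustrative assumption, not the model's real architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def top_k_routing(x, gate_w, k=2):
    """Score all experts for one token embedding and keep the top-k.

    This is a generic MoE gating sketch, not Mark1X's implementation.
    """
    logits = x @ gate_w                      # one score per expert
    top = np.argsort(logits)[-k:][::-1]      # indices of the k best-scoring experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()                 # softmax over the selected experts only
    return top, weights

d_model, n_experts = 16, 8                   # toy sizes for illustration
gate_w = rng.normal(size=(d_model, n_experts))
token = rng.normal(size=d_model)

experts, weights = top_k_routing(token, gate_w)
print(experts, weights)                      # 2 expert ids and their mixing weights
```

Because only k of the experts run per token, total parameter count can grow without a proportional increase in per-token compute, which is the efficiency argument behind MoE designs generally.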