# The Multivac — Evaluation Report

**Evaluation ID:** EVAL-20260207-142101
**Date:** Feb 24, 2026
**Category:** code
**Question ID:** CODE-007

---

## Question

Explain what this code does in plain English. Then identify any bugs or design issues.

```python
def f(x, n=3, m=None):
    m = m or {}
    if n == 0:
        return [[]]
    if x in m:
        return m[x]
    r = []
    for i in range(len(x)):
        for p in f(x[:i] + x[i+1:], n-1, m):
            r.append([x[i]] + p)
    m[x] = r
    return r

def g(s, k):
    from collections import Counter
    c = Counter(s)
    h = []
    import heapq
    for ch, cnt in c.items():
        heapq.heappush(h, (-cnt, ch))
    r = []
    while h and len(r) < k:
        cnt, ch = heapq.heappop(h)
        r.append(ch)
    return ''.join(r)
```
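
For context: `f` enumerates all length-`n` permutations of a sliceable, hashable sequence such as a string (a `list` argument raises `TypeError` at the memo lookup, since lists are unhashable), and `g` returns the `k` most frequent characters of `s`, most frequent first, with ties broken alphabetically. The headline defects are the `m = m or {}` guard, which silently discards a caller-supplied empty dict, and a memo keyed on `x` alone rather than `(x, n)`, which can return stale results when one cache is reused across calls with different `n`. The following is a minimal corrected sketch, not a reference answer from the evaluation; `permutations_of` and `top_k_chars` are illustrative names.

```python
import heapq
from collections import Counter

def permutations_of(x, n, m=None):
    """All length-n permutations of a hashable, sliceable sequence x (e.g. a str)."""
    if m is None:              # `m or {}` would also discard a caller's empty dict
        m = {}
    if n == 0:
        return [[]]
    key = (x, n)               # the memo must key on n as well, not on x alone
    if key in m:
        return m[key]          # caution: the cached list is shared; copy before mutating
    r = []
    for i in range(len(x)):
        for p in permutations_of(x[:i] + x[i+1:], n - 1, m):
            r.append([x[i]] + p)
    m[key] = r
    return r

def top_k_chars(s, k):
    """The k most frequent characters of s, most frequent first, ties alphabetical."""
    pairs = ((-cnt, ch) for ch, cnt in Counter(s).items())
    # nsmallest on (-count, char) gives highest count first, lexicographic tie-break
    return ''.join(ch for _, ch in heapq.nsmallest(k, pairs))
```

For example, `top_k_chars("mississippi", 2)` returns `"is"`, matching the original `g` while replacing the manual push loop with `heapq.nsmallest` and moving the imports to module level.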

---

## Winner

**GLM-4-7** (Zhipu)
- Winner Score: 9.45
- Matrix Average: 8.34
- Total Judgments: 90

---

## Rankings

| Rank | Model | Provider | Avg Score | Judgments |
|------|-------|----------|-----------|----------|
| 1 | GLM-4-7 | Zhipu | 9.45 | 7 |
| 2 | GPT-5.2-Codex | OpenAI | 9.23 | 8 |
| 3 | Grok Code Fast | xAI | 8.96 | 7 |
| 4 | DeepSeek V3.2 | DeepSeek | 8.95 | 7 |
| 5 | Gemini 3 Flash Preview | Google | 8.88 | 8 |
| 6 | Claude Opus 4.5 | Anthropic | 8.81 | 7 |
| 7 | Gemini 3 Pro Preview | Google | 8.51 | 9 |
| 8 | Claude Sonnet 4.5 | Anthropic | 8.11 | 7 |
| 9 | Grok 3 (Direct) | xAI | 8.04 | 7 |
| 10 | MiniMax M2 | MiniMax | 4.45 | 5 |

---

## 10×10 Judgment Matrix

Rows = Judge, Columns = Respondent. Self-judgments excluded (—).

| Judge ↓ / Resp → | Grok Code Fast | Gemini 3 Pro | GPT-5.2-Codex | DeepSeek V3.2 | Claude Opus | Gemini 3 Flash | Claude Sonnet | MiniMax M2 | GLM-4-7 | Grok 3 |
|---|---|---|---|---|---|---|---|---|---|---|
| Grok Code Fast | — | 9.2 | 9.2 | 9.6 | 9.4 | 9.8 | 8.9 | 2.0 | 9.8 | 9.4 |
| Gemini 3 Pro | 0.0 | — | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| GPT-5.2-Codex | 8.0 | 7.0 | — | 7.8 | 7.8 | 6.2 | 5.3 | 0.0 | 0.0 | 5.5 |
| DeepSeek V3.2 | 9.1 | 8.8 | 9.6 | — | 8.7 | 9.6 | 9.2 | 7.5 | 9.6 | 9.2 |
| Claude Opus | 9.0 | 8.8 | 9.2 | 9.2 | — | 9.2 | 7.8 | 1.4 | 9.2 | 7.2 |
| Gemini 3 Flash | 9.6 | 9.6 | 9.8 | 9.6 | 10.0 | — | 8.6 | 0.0 | 9.8 | 8.3 |
| Claude Sonnet | 9.3 | 9.3 | 9.3 | 9.0 | 8.8 | 9.3 | — | 5.5 | 9.3 | 9.3 |
| MiniMax M2 | 9.2 | 6.2 | 8.6 | 8.8 | 8.6 | 9.0 | 8.6 | — | 10.0 | 7.3 |
| GLM-4-7 | 0.0 | 9.3 | 9.8 | 0.0 | 0.0 | 9.2 | 0.0 | 0.0 | — | 0.0 |
| Grok 3 | 8.7 | 8.4 | 8.3 | 8.7 | 8.4 | 8.8 | 8.4 | 5.9 | 8.4 | — |
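
Reading the matrix: the per-model averages in the rankings can be reproduced from the columns, on the assumption (supported by the per-model judgment counts) that 0.0 entries mark judgments that were never returned and are excluded along with self-judgments. For GLM-4-7, the seven nonzero column scores are 9.8, 9.6, 9.2, 9.8, 9.3, 10.0, and 8.4; their mean is 66.1 / 7 ≈ 9.44, which agrees with the reported 9.45 up to the one-decimal rounding of the displayed cells.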

---

## Methodology

- **10×10 Blind Peer Matrix:** All models answer the same question, then all models judge all responses.
- **5 Criteria:** Correctness, completeness, clarity, depth, usefulness (each scored 1–10).
- **Self-judgments excluded:** Models do not judge their own responses.
- **Weighted Score:** Each judgment's score is a weighted composite of the five criterion scores; the aggregation into per-model averages is sketched below.
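
As a concrete illustration of the aggregation, here is a small sketch in Python, assuming the convention inferred from the matrix above, namely that self-judgments and 0.0 (failed) judgments are both dropped; `respondent_averages` is an illustrative name, not part of The Multivac's published tooling.

```python
def respondent_averages(matrix):
    """matrix[judge][respondent] -> weighted score, where 0.0 marks a failed judgment.

    Returns each respondent's mean score, excluding self-judgments and failures.
    """
    scores = {}
    for judge, row in matrix.items():
        for respondent, score in row.items():
            if respondent == judge or score == 0.0:
                continue  # drop self-judgments and failed (0.0) judgments
            scores.setdefault(respondent, []).append(score)
    return {r: round(sum(v) / len(v), 2) for r, v in scores.items()}
```

Applied to the rounded cells above, this reproduces the ranking averages and judgment counts to within display rounding, e.g. 4.46 versus the reported 4.45 for MiniMax M2 over its 5 valid judgments.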

---

## Citation

The Multivac (2026). Blind Peer Evaluation: CODE-007. app.themultivac.com

## License

Open data. Free to use, share, and build upon. Please cite The Multivac when using this data.

Download raw JSON: https://app.themultivac.com/api/evaluations/EVAL-20260207-142101/results
Full dataset: https://app.themultivac.com/dashboard/export
