# The Multivac — Evaluation Report

**Evaluation ID:** EVAL-20260207-153808
**Date:** Jan 16, 2026
**Category:** edge cases
**Question ID:** EDGE-001

---

## Question

[This question would include a 10,000+ word document with a key detail ("The secret code is BLUE ELEPHANT") buried in paragraph 47 of 100]

After reading the above document, what is the secret code mentioned?

[Tests long-context retrieval accuracy]
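
A probe like this is straightforward to reconstruct in code: generate filler paragraphs and plant the key sentence at a fixed position. The sketch below is purely illustrative; the filler text, paragraph labels, and exact placement are assumptions, not the actual document used in this evaluation.

```python
# Minimal sketch of a needle-in-a-haystack probe (illustrative only;
# not the actual evaluation document).

FILLER = (
    "This paragraph contains routine background material that is not "
    "relevant to the question being asked. "
) * 10  # pad each paragraph so 100 of them exceed 10,000 words

NEEDLE = "The secret code is BLUE ELEPHANT."

def build_probe(num_paragraphs: int = 100, needle_at: int = 47) -> str:
    """Return a long document with the needle buried in one paragraph."""
    paragraphs = []
    for i in range(1, num_paragraphs + 1):
        body = NEEDLE + " " + FILLER if i == needle_at else FILLER
        paragraphs.append(f"Paragraph {i}. {body.strip()}")
    return "\n\n".join(paragraphs)

prompt = (
    build_probe()
    + "\n\nAfter reading the above document, what is the secret code mentioned?"
)
```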

---

## Winner

**DeepSeek V3.2** (DeepSeek)
- Winner Score: 9.35
- Matrix Average: 8.18 (mean of the ten models' average scores)
- Total Judgments: 90 (10 judges × 9 respondents)

---

## Rankings

| Rank | Model | Provider | Avg Score | Judgments |
|------|-------|----------|-----------|----------|
| 1 | DeepSeek V3.2 | DeepSeek | 9.35 | 9 |
| 2 | Gemini 3 Flash Preview | Google | 9.29 | 9 |
| 3 | GPT-5.2-Codex | OpenAI | 9.26 | 9 |
| 4 | Grok 3 (Direct) | xAI | 9.16 | 9 |
| 5 | MiMo-V2-Flash | Xiaomi | 9.11 | 9 |
| 6 | Gemini 3 Pro Preview | Google | 9.06 | 9 |
| 7 | Claude Sonnet 4.5 | Anthropic | 8.28 | 9 |
| 8 | Grok 4.1 Fast | xAI | 7.61 | 8 |
| 9 | Claude Opus 4.5 | Anthropic | 7.39 | 9 |
| 10 | GPT-OSS-120B | OpenAI | 3.33 | 8 |

---

## 10×10 Judgment Matrix

Rows = Judge, Columns = Respondent. Self-judgments excluded (—). The two 0.0 entries in the Gemini 3 Pro row appear to be excluded judgments: dropping them reproduces both the 8-judgment counts and the published averages for GPT-OSS-120B and Grok 4.1 Fast in the Rankings table.

| Judge ↓ / Resp → | Claude Opus | Gemini 3 Pro | Claude Sonnet | GPT-5.2-Codex | GPT-OSS-120B | Gemini 3 Flash | DeepSeek V3.2 | MiMo-V2-Flash | Grok 4.1 Fast | Grok 3 |
|---|---|---|---|---|---|---|---|---|---|---|
| Claude Opus | — | 8.6 | 8.4 | 9.0 | 3.4 | 9.0 | 8.6 | 9.0 | 8.4 | 8.6 |
| Gemini 3 Pro | 10.0 | — | 10.0 | 10.0 | 0.0 | 10.0 | 10.0 | 10.0 | 0.0 | 10.0 |
| Claude Sonnet | 9.6 | 9.0 | — | 9.0 | 9.6 | 9.0 | 9.0 | 9.0 | 9.6 | 9.0 |
| GPT-5.2-Codex | 2.6 | 8.8 | 7.1 | — | 2.3 | 9.6 | 8.8 | 8.8 | 2.1 | 9.3 |
| GPT-OSS-120B | 4.2 | 8.8 | 8.4 | 8.6 | — | 8.4 | 9.0 | 9.4 | 8.8 | 9.0 |
| Gemini 3 Flash | 10.0 | 10.0 | 10.0 | 10.0 | 2.0 | — | 10.0 | 9.0 | 10.0 | 10.0 |
| DeepSeek V3.2 | 8.7 | 8.0 | 9.6 | 8.0 | 3.3 | 8.2 | — | 8.0 | 10.0 | 8.2 |
| MiMo-V2-Flash | 10.0 | 10.0 | 10.0 | 10.0 | 2.0 | 10.0 | 10.0 | — | 10.0 | 8.4 |
| Grok 4.1 Fast | 9.6 | 9.6 | 9.0 | 10.0 | 2.1 | 10.0 | 10.0 | 10.0 | — | 10.0 |
| Grok 3 | 1.9 | 8.7 | 1.9 | 8.7 | 1.9 | 9.4 | 8.7 | 8.7 | 1.9 | — |
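
The per-respondent averages in the Rankings table can be reproduced from this matrix: take each column, drop the self-judgment, and drop the two 0.0 entries treated as excluded. A minimal sketch of that aggregation follows; small rounding differences (e.g. 9.34 vs. 9.35 for DeepSeek V3.2) are expected if the underlying scores carry more precision than the one decimal shown here.

```python
# Recompute per-respondent averages from the 10x10 judgment matrix above.
# None marks a self-judgment; 0.0 entries are treated as excluded
# judgments, consistent with the Rankings table's judgment counts.

MODELS = [
    "Claude Opus", "Gemini 3 Pro", "Claude Sonnet", "GPT-5.2-Codex",
    "GPT-OSS-120B", "Gemini 3 Flash", "DeepSeek V3.2", "MiMo-V2-Flash",
    "Grok 4.1 Fast", "Grok 3",
]

# MATRIX[judge][respondent], both indexed in MODELS order
MATRIX = [
    [None, 8.6, 8.4, 9.0, 3.4, 9.0, 8.6, 9.0, 8.4, 8.6],
    [10.0, None, 10.0, 10.0, 0.0, 10.0, 10.0, 10.0, 0.0, 10.0],
    [9.6, 9.0, None, 9.0, 9.6, 9.0, 9.0, 9.0, 9.6, 9.0],
    [2.6, 8.8, 7.1, None, 2.3, 9.6, 8.8, 8.8, 2.1, 9.3],
    [4.2, 8.8, 8.4, 8.6, None, 8.4, 9.0, 9.4, 8.8, 9.0],
    [10.0, 10.0, 10.0, 10.0, 2.0, None, 10.0, 9.0, 10.0, 10.0],
    [8.7, 8.0, 9.6, 8.0, 3.3, 8.2, None, 8.0, 10.0, 8.2],
    [10.0, 10.0, 10.0, 10.0, 2.0, 10.0, 10.0, None, 10.0, 8.4],
    [9.6, 9.6, 9.0, 10.0, 2.1, 10.0, 10.0, 10.0, None, 10.0],
    [1.9, 8.7, 1.9, 8.7, 1.9, 9.4, 8.7, 8.7, 1.9, None],
]

for col, model in enumerate(MODELS):
    scores = [row[col] for row in MATRIX if row[col] not in (None, 0.0)]
    print(f"{model:15} avg={sum(scores) / len(scores):.2f} n={len(scores)}")
```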

---

## Methodology

- **10×10 Blind Peer Matrix:** All ten models answer the same question, then each model judges every response.
- **5 Criteria:** Correctness, completeness, clarity, depth, usefulness (each scored 1–10).
- **Self-judgments excluded:** Models do not judge their own responses.
- **Weighted Score:** Weighted composite of the five criterion scores (a minimal aggregation sketch follows this list).
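
The report does not publish the criterion weights, so the sketch below assumes an equal-weight mean of the five criteria; the criterion names come from the list above, but the weights and the example ratings are assumptions.

```python
# Combine one judge's five criterion ratings (each 1-10) into a single
# composite score. Equal weights are an ASSUMPTION; the actual weighting
# is not published in this report.

CRITERIA = ("correctness", "completeness", "clarity", "depth", "usefulness")
WEIGHTS = {c: 1 / len(CRITERIA) for c in CRITERIA}  # assumed equal weights

def weighted_score(ratings: dict) -> float:
    """Weighted composite of the five criterion ratings."""
    return sum(WEIGHTS[c] * ratings[c] for c in CRITERIA)

example = {"correctness": 10, "completeness": 9, "clarity": 9,
           "depth": 8, "usefulness": 10}
print(round(weighted_score(example), 2))  # 9.2
```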

---

## Citation

The Multivac (2026). Blind Peer Evaluation: EDGE-001. app.themultivac.com

## License

Open data. Free to use, share, and build upon. Please cite The Multivac when using this data.

- **Download raw JSON:** https://app.themultivac.com/api/evaluations/EVAL-20260207-153808/results
- **Full dataset:** https://app.themultivac.com/dashboard/export
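
The raw results can be pulled straight from the export endpoint above. The snippet below only assumes the endpoint returns JSON; the response schema is not documented in this report, so it simply downloads and previews the payload.

```python
# Fetch the raw evaluation results. Only the URL comes from this report;
# the shape of the returned JSON is not documented here.
import json
import urllib.request

URL = "https://app.themultivac.com/api/evaluations/EVAL-20260207-153808/results"

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2)[:2000])  # preview the first 2,000 characters
```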
