# The Multivac — Evaluation Report

**Evaluation ID:** EVAL-20260402-234741
**Date:** Apr 02, 2026
**Category:** communication
**Question ID:** COMM-024

---

## Question

Rewrite these technical feature descriptions as customer-facing value propositions: (1) 'We use a distributed event-driven architecture with CQRS.' (2) 'Our model achieves 0.94 F1 score on the benchmark.' (3) 'Built on Kubernetes with auto-scaling and 99.99% SLA.' (4) 'End-to-end encryption with AES-256 and RSA key exchange.' (5) 'Sub-100ms p99 latency with edge caching.' For each, the customer should understand WHY they should care, not HOW it works.

---

## Winner

**Claude Sonnet 4.6** (openrouter)
- Winner Score: 9.38
- Matrix Average: 9.08
- Total Judgments: 90

---

## Rankings

| Rank | Model | Provider | Avg Score | Judgments |
|------|-------|----------|-----------|----------|
| 1 | Claude Sonnet 4.6 | openrouter | 9.38 | 9 |
| 2 | Claude Opus 4.6 | openrouter | 9.36 | 9 |
| 3 | GPT-OSS-120B | OpenAI | 9.18 | 9 |
| 4 | Grok 4.20 | openrouter | 9.17 | 9 |
| 5 | Mistral Small Creative | Mistral | 9.14 | 9 |
| 6 | Gemini 3.1 Pro | openrouter | 9.11 | 9 |
| 7 | MiMo-V2-Flash | Xiaomi | 9.11 | 9 |
| 8 | GPT-5.4 | openrouter | 9.02 | 9 |
| 9 | DeepSeek V4 | openrouter | 9.02 | 9 |
| 10 | Seed 1.6 Flash | openrouter | 8.28 | 9 |

---

## 10×10 Judgment Matrix

Rows = Judge, Columns = Respondent. Self-judgments excluded (—).

| Judge ↓ / Resp → | DeepSeek V4 | Claude Opus | GPT-OSS-120B | GPT-5.4 | Claude Sonnet | Gemini 3.1 Pro | Grok 4.20 | MiMo-V2-Flash | Mistral Small | Seed 1.6 Flash |
|---|---|---|---|---|---|---|---|---|---|---|
| DeepSeek V4 | — | 9.8 | 9.8 | 9.6 | 9.8 | 9.8 | 9.6 | 9.0 | 9.8 | 9.8 |
| Claude Opus | 8.6 | — | 9.0 | 9.0 | 9.6 | 9.2 | 9.0 | 9.0 | 9.6 | 5.8 |
| GPT-OSS-120B | 8.4 | 9.2 | — | 8.4 | 8.8 | 8.6 | 8.8 | 8.8 | 8.8 | 8.8 |
| GPT-5.4 | 8.8 | 8.6 | 8.3 | — | 9.3 | 7.7 | 9.0 | 8.6 | 8.6 | 7.5 |
| Claude Sonnet | 8.9 | 9.6 | 9.2 | 8.8 | — | 9.2 | 9.2 | 9.3 | 9.2 | 8.3 |
| Gemini 3.1 Pro | 9.8 | 10.0 | 9.8 | 9.8 | 10.0 | — | 9.8 | 9.8 | 9.6 | 7.0 |
| Grok 4.20 | 9.0 | 9.0 | 8.8 | 8.4 | 9.0 | 9.0 | — | 9.0 | 9.0 | 8.4 |
| MiMo-V2-Flash | 9.0 | 9.3 | 9.3 | 8.8 | 9.3 | 9.8 | 8.6 | — | 8.6 | 9.0 |
| Mistral Small | 9.8 | 9.8 | 9.8 | 9.8 | 9.8 | 9.8 | 9.8 | 9.8 | — | 9.8 |
| Seed 1.6 Flash | 8.8 | 9.0 | 8.6 | 8.6 | 8.8 | 9.0 | 8.8 | 8.7 | 9.3 | — |
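
To make the aggregation concrete, here is a minimal sketch that recomputes each model's Avg Score as the column mean of its nine peer judgments, with the diagonal self-judgment excluded. The values are transcribed from the one-decimal cells above, so the output can differ from the Rankings column by ±0.01 where the published rankings used unrounded scores.

```python
# Minimal sketch: recompute each model's Avg Score from the matrix above.
# matrix[j][r] = judge j's score for respondent r; None marks the
# excluded self-judgment on the diagonal.
models = ["DeepSeek V4", "Claude Opus", "GPT-OSS-120B", "GPT-5.4",
          "Claude Sonnet", "Gemini 3.1 Pro", "Grok 4.20",
          "MiMo-V2-Flash", "Mistral Small", "Seed 1.6 Flash"]

matrix = [
    [None, 9.8, 9.8, 9.6, 9.8, 9.8, 9.6, 9.0, 9.8, 9.8],
    [8.6, None, 9.0, 9.0, 9.6, 9.2, 9.0, 9.0, 9.6, 5.8],
    [8.4, 9.2, None, 8.4, 8.8, 8.6, 8.8, 8.8, 8.8, 8.8],
    [8.8, 8.6, 8.3, None, 9.3, 7.7, 9.0, 8.6, 8.6, 7.5],
    [8.9, 9.6, 9.2, 8.8, None, 9.2, 9.2, 9.3, 9.2, 8.3],
    [9.8, 10.0, 9.8, 9.8, 10.0, None, 9.8, 9.8, 9.6, 7.0],
    [9.0, 9.0, 8.8, 8.4, 9.0, 9.0, None, 9.0, 9.0, 8.4],
    [9.0, 9.3, 9.3, 8.8, 9.3, 9.8, 8.6, None, 8.6, 9.0],
    [9.8, 9.8, 9.8, 9.8, 9.8, 9.8, 9.8, 9.8, None, 9.8],
    [8.8, 9.0, 8.6, 8.6, 8.8, 9.0, 8.8, 8.7, 9.3, None],
]

# Column mean over the nine peer judgments (diagonal excluded).
for r, name in enumerate(models):
    scores = [row[r] for row in matrix if row[r] is not None]
    print(f"{name}: {sum(scores) / len(scores):.2f}")
```

From the displayed cells this reproduces the winner's 9.38 exactly; a few other entries land within ±0.01 of the Rankings column, consistent with one-decimal rounding of the published cells.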

---

## Methodology

- **10×10 Blind Peer Matrix:** All models answer the same question, then each model judges every other model's response.
- **5 Criteria:** Correctness, completeness, clarity, depth, usefulness (each scored 1–10).
- **Self-judgments excluded:** Models do not judge their own responses.
- **Weighted Score:** Composite of all 5 criteria; a formula sketch follows this list.
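
The report does not publish the criterion weights, so the formula below is a hypothetical sketch assuming a uniform composite, i.e. the plain mean of the five criterion scores. Here $c_{i,j,r}$ is judge $j$'s score for respondent $r$ on criterion $i$, and $\bar{s}_r$ is the Avg Score shown in the Rankings table.

```latex
% Hypothetical composite: the report does not state the weights w_i,
% so uniform weights (the plain mean of the five criteria) are assumed.
s_{j,r} = \sum_{i=1}^{5} w_i \, c_{i,j,r}, \qquad w_i = \tfrac{1}{5}

% A model's Avg Score is the mean over its nine peer judges (j \ne r):
\bar{s}_r = \frac{1}{9} \sum_{j \ne r} s_{j,r}
```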

---

## Citation

The Multivac (2026). Blind Peer Evaluation: COMM-024. app.themultivac.com

## License

Open data. Free to use, share, and build upon. Please cite The Multivac when using this data.

Download raw JSON: https://app.themultivac.com/api/evaluations/EVAL-20260402-234741/results
Full dataset: https://app.themultivac.com/dashboard/export
