Compare pricing, benchmarks, and capabilities across 7 AI models
| Model | Provider | Input ($/1M tokens) | Output ($/1M tokens) | Context | Intelligence | Speed (tok/s) | Latency (s) | API |
|---|---|---|---|---|---|---|---|---|
| MiMo-V2-Flash (Reasoning) | Xiaomi | — | — | — | 39.2 | 123 | 1.8 | — |
| MiMo-V2-Flash (Non-reasoning) | Xiaomi | — | — | — | 30.4 | 124 | 1.5 | — |
| MiMo-V2-Omni | Xiaomi | — | — | — | 43.4 | — | — | — |
| MiMo-V2-Pro | Xiaomi | — | — | — | 49.2 | 67 | 2.1 | — |
| MiMo-V2-TTS | Xiaomi | — | — | — | — | — | — | — |
| MiMo-V2-Flash (Feb 2026) | Xiaomi | — | — | — | 41.5 | 127 | 1.5 | — |
| MiMo-V2-Omni-0327 | Xiaomi | — | — | — | 44.9 | — | — | — |
To compare costs across models, multiply your expected token usage by each model's per-token prices. As a rule of thumb, 1,000,000 tokens is roughly 750,000 words, and output volume is usually 30–50% of input volume.
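The cost estimate described above can be sketched in a few lines. The prices used here are hypothetical placeholders (the table above lists no published pricing for these models), and the 40% output ratio is just a midpoint of the 30–50% rule of thumb:

```python
def estimate_cost(input_tokens, output_tokens, input_price_per_m, output_price_per_m):
    """Total cost in dollars, given per-1M-token prices."""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

input_tokens = 1_000_000                  # ~750,000 words
output_tokens = int(input_tokens * 0.4)   # rule of thumb: 30-50% of input
# Hypothetical rates: $0.50/1M input, $1.50/1M output
cost = estimate_cost(input_tokens, output_tokens, 0.50, 1.50)
print(f"${cost:.2f}")  # → $1.10
```

Run the estimate once per model with its own rates to rank models by expected spend.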
Prices are approximate and may vary. Check provider documentation for current pricing.