terminal-bench@2.0 Leaderboard
Run the benchmark with either of the following commands:

```shell
harbor run -d terminal-bench@2.0 -a "agent" -m "model" -k 5
harbor run -d terminal-bench@2.0 --agent-import-path "path.to.agent:SomeAgent" -k 5
```
| Rank | Agent | Model | Date | Agent Org | Model Org | Accuracy |
|---|---|---|---|---|---|---|
| 1 | Forge Code | Gemini 3.1 Pro | 2026-03-02 | Forge Code | | 78.4% ± 1.8 |
| 2 | Droid | GPT-5.3-Codex | 2026-02-24 | Factory | OpenAI | 77.3% ± 2.2 |
| 3 | Simple Codex | GPT-5.3-Codex | 2026-02-06 | OpenAI | OpenAI | 75.1% ± 2.4 |
| 4 | Terminus-KIRA | Gemini 3.1 Pro | 2026-02-23 | KRAFTON AI | | 74.8% ± 2.6 |
| 5 | Terminus-KIRA | Claude Opus 4.6 | 2026-02-22 | KRAFTON AI | Anthropic | 74.7% ± 2.6 |
| 6 | Mux | GPT-5.3-Codex | 2026-03-06 | Coder | OpenAI | 74.6% ± 2.5 |
| 7 | OB-1 | Multiple | 2026-03-05 | OpenBlock Labs | Multiple | 72.4% ± 2.3 |
| 8 | TongAgents | Claude Opus 4.6 | 2026-02-22 | Bigai | Anthropic | 71.9% ± 2.7 |
| 9 | Junie CLI | Multiple | 2026-03-07 | JetBrains | Multiple | 71.0% ± 2.9 |
| 10 | CodeBrain-1 | GPT-5.3-Codex | 2026-02-10 | Feeling AI | OpenAI | 70.3% ± 2.6 |
| 11 | Droid | Claude Opus 4.6 | 2026-02-05 | Factory | Anthropic | 69.9% ± 2.5 |
| 12 | Ante | Gemini 3 Pro | 2026-01-06 | Antigma Labs | | 69.4% ± 2.1 |
| 13 | Crux | Claude Opus 4.6 | 2026-02-23 | Roam | Anthropic | 66.9% ± N/A |
| 14 | Deep Agents | GPT-5.2-Codex | 2026-02-12 | LangChain | OpenAI | 66.5% ± 3.1 |
| 15 | Mux | Claude Opus 4.6 | 2026-02-13 | Coder | Anthropic | 66.5% ± 2.5 |
| 16 | SageAgent | Gemini 3 Pro | 2026-02-23 | OpenSage | | 65.2% ± 2.1 |
| 17 | Droid | GPT-5.2 | 2025-12-24 | Factory | OpenAI | 64.9% ± 2.8 |
| 18 | Terminus 2 | GPT-5.3-Codex | 2026-02-05 | Terminal Bench | OpenAI | 64.7% ± 2.7 |
| 19 | Junie CLI | Gemini 3 Flash | 2025-12-23 | JetBrains | | 64.3% ± 2.8 |
| 20 | Droid | Claude Opus 4.5 | 2025-12-11 | Factory | Anthropic | 63.1% ± 2.7 |
| 21 | Codex CLI | GPT-5.2 | 2025-12-18 | OpenAI | OpenAI | 62.9% ± 3.0 |
| 22 | Terminus 2 | Claude Opus 4.6 | 2026-02-06 | Terminal Bench | Anthropic | 62.9% ± 2.7 |
| 23 | CodeBrain-1 | Gemini 3 Pro | 2026-02-05 | Feeling AI | | 62.2% ± 2.6 |
| 24 | II-Agent | Gemini 3 Pro | 2025-12-23 | Intelligent Internet | | 61.8% ± 2.8 |
| 25 | Warp | Multiple | 2025-12-12 | Warp | Multiple | 61.2% ± 3.0 |
| 26 | Droid | Gemini 3 Pro | 2025-12-24 | Factory | | 61.1% ± 2.8 |
| 27 | Mux | GPT-5.2 | 2026-01-17 | Coder | OpenAI | 60.7% ± N/A |
| 28 | Codex CLI | GPT-5.1-Codex-Max | 2025-11-24 | OpenAI | OpenAI | 60.4% ± 2.7 |
| 29 | Letta Code | Claude Opus 4.5 | 2025-12-17 | Letta | Anthropic | 59.1% ± 2.4 |
| 30 | Warp | Multiple | 2025-11-20 | Warp | Multiple | 59.1% ± 2.8 |
| 31 | Abacus AI Desktop | Multiple | 2025-12-11 | Abacus.AI | Multiple | 58.4% ± 2.8 |
| 32 | Mux | Claude Opus 4.5 | 2026-01-17 | Coder | Anthropic | 58.4% ± N/A |
| 33 | Claude Code | Claude Opus 4.6 | 2026-02-07 | Anthropic | Anthropic | 58.0% ± 2.9 |
| 34 | Crux | GPT-5.1-Codex | 2025-11-16 | Roam | OpenAI | 57.8% ± 2.9 |
| 35 | Terminus 2 | Claude Opus 4.5 | 2025-11-22 | Terminal Bench | Anthropic | 57.8% ± 2.5 |
| 36 | Terminus 2 | Gemini 3 Pro | 2025-11-21 | Terminal Bench | | 56.9% ± 2.5 |
| 37 | Letta Code | Gemini 3 Pro | 2025-12-17 | Letta | | 56.0% ± 3.0 |
| 38 | Goose | Claude Opus 4.5 | 2025-12-11 | Block | Anthropic | 54.3% ± 2.6 |
| 39 | Terminus 2 | GPT-5.2 | 2025-12-12 | Terminal Bench | OpenAI | 54.0% ± 2.9 |
| 40 | Letta Code | GPT-5.1-Codex | 2025-12-17 | Letta | OpenAI | 53.5% ± 2.8 |
| 41 | Terminus 2 | GLM 5 | 2026-02-23 | Terminal Bench | Z-AI | 52.4% ± 2.6 |
| 42 | Claude Code | Claude Opus 4.5 | 2025-12-18 | Anthropic | Anthropic | 52.1% ± 2.5 |
| 43 | OpenHands | Claude Opus 4.5 | 2026-01-04 | OpenHands | Anthropic | 51.9% ± 2.9 |
| 44 | OpenCode | Claude Opus 4.5 | 2026-01-12 | Anomaly Innovations | Anthropic | 51.7% ± N/A |
| 45 | Terminus 2 | Gemini 3 Flash | 2026-01-07 | Terminal Bench | | 51.7% ± 3.1 |
| 46 | Gemini CLI | Gemini 3 Flash | 2025-12-23 | | | 51.0% ± 3.0 |
| 47 | Warp | Multiple | 2025-11-11 | Warp | Multiple | 50.1% ± 2.7 |
| 48 | Codex CLI | GPT-5 | 2025-11-04 | OpenAI | OpenAI | 49.6% ± 2.9 |
| 49 | Terminus 2 | GPT-5.1 | 2025-11-16 | Terminal Bench | OpenAI | 47.6% ± 2.8 |
| 50 | Gemini CLI | Gemini 3 Flash | 2026-03-06 | | | 47.4% ± 3.0 |
| 51 | CAMEL-AI | Claude Sonnet 4.5 | 2025-12-24 | CAMEL-AI | Anthropic | 46.5% ± 2.4 |
| 52 | Codex CLI | GPT-5-Codex | 2025-11-04 | OpenAI | OpenAI | 44.3% ± 2.7 |
| 53 | OpenHands | GPT-5 | 2025-11-02 | OpenHands | OpenAI | 43.8% ± 3.0 |
| 54 | Terminus 2 | GPT-5-Codex | 2025-10-31 | Terminal Bench | OpenAI | 43.4% ± 2.9 |
| 55 | Terminus 2 | Kimi K2.5 | 2026-02-04 | Terminal Bench | Kimi | 43.2% ± 2.9 |
| 56 | Crux | GPT-5.1-Codex-Mini | 2025-11-17 | Roam | OpenAI | 43.1% ± 3.0 |
| 57 | Goose | Claude Sonnet 4.5 | 2025-12-11 | Block | Anthropic | 43.1% ± 2.6 |
| 58 | Terminus 2 | Claude Sonnet 4.5 | 2025-10-31 | Terminal Bench | Anthropic | 42.8% ± 2.8 |
| 59 | MAYA | Claude 4.5 Sonnet | 2026-01-04 | ADYA | Anthropic | 42.7% ± N/A |
| 60 | OpenHands | Claude Sonnet 4.5 | 2025-11-02 | OpenHands | Anthropic | 42.6% ± 2.8 |
| 61 | Mini-SWE-Agent | Claude Sonnet 4.5 | 2025-11-03 | Princeton | Anthropic | 42.5% ± 2.8 |
| 62 | Terminus 2 | Minimax m2.5 | 2026-02-23 | Terminal Bench | Minimax | 42.2% ± 2.6 |
| 63 | Mini-SWE-Agent | GPT-5-Codex | 2025-11-03 | Princeton | OpenAI | 41.3% ± 2.8 |
| 64 | Claude Code | Claude Sonnet 4.5 | 2025-11-04 | Anthropic | Anthropic | 40.1% ± 2.9 |
| 65 | Terminus 2 | DeepSeek-V3.2 | 2026-02-10 | Terminal Bench | DeepSeek | 39.6% ± 2.8 |
| 66 | Terminus 2 | Claude Opus 4.1 | 2025-10-31 | Terminal Bench | Anthropic | 38.0% ± 2.6 |
| 67 | OpenHands | Claude Opus 4.1 | 2025-11-02 | OpenHands | Anthropic | 36.9% ± 2.7 |
| 68 | Terminus 2 | GPT-5.1-Codex | 2025-11-17 | Terminal Bench | OpenAI | 36.9% ± 3.2 |
| 69 | Crux | MiniMax M2.1 | 2025-12-22 | Roam | MiniMax | 36.6% ± 2.9 |
| 70 | Terminus 2 | Kimi K2 Thinking | 2025-11-11 | Terminal Bench | Moonshot AI | 35.7% ± 2.8 |
| 71 | Goose | Claude Haiku 4.5 | 2025-12-11 | Block | Anthropic | 35.5% ± 2.9 |
| 72 | Terminus 2 | GPT-5 | 2025-10-31 | Terminal Bench | OpenAI | 35.2% ± 3.1 |
| 73 | Mini-SWE-Agent | Claude Opus 4.1 | 2025-11-03 | Princeton | Anthropic | 35.1% ± 2.5 |
| 74 | spoox-m | GPT-5-Mini | 2025-12-24 | TUM | OpenAI | 34.8% ± 2.7 |
| 75 | Claude Code | Claude Opus 4.1 | 2025-11-04 | Anthropic | Anthropic | 34.8% ± 2.9 |
| 76 | Mini-SWE-Agent | GPT-5 | 2025-11-03 | Princeton | OpenAI | 33.9% ± 2.9 |
| 77 | Terminus 2 | GLM 4.7 | 2026-01-28 | Terminal Bench | Z-AI | 33.4% ± 2.8 |
| 78 | Crux | GLM 4.7 | 2026-02-08 | Roam | Z-AI | 33.3% ± 2.5 |
| 79 | Terminus 2 | Gemini 2.5 Pro | 2025-10-31 | Terminal Bench | | 32.6% ± 3.0 |
| 80 | Codex CLI | GPT-5-Mini | 2025-11-04 | OpenAI | OpenAI | 31.9% ± 3.0 |
| 81 | Terminus 2 | MiniMax M2 | 2025-11-01 | Terminal Bench | MiniMax | 30.0% ± 2.7 |
| 82 | Mini-SWE-Agent | Claude Haiku 4.5 | 2025-11-03 | Princeton | Anthropic | 29.8% ± 2.5 |
| 83 | Terminus 2 | MiniMax M2.1 | 2025-12-23 | Terminal Bench | MiniMax | 29.2% ± 2.9 |
| 84 | OpenHands | GPT-5-Mini | 2025-11-02 | OpenHands | OpenAI | 29.2% ± 2.8 |
| 85 | Terminus 2 | Claude Haiku 4.5 | 2025-10-31 | Terminal Bench | Anthropic | 28.3% ± 2.9 |
| 86 | Terminus 2 | Kimi K2 Instruct | 2025-11-01 | Terminal Bench | Moonshot AI | 27.8% ± 2.5 |
| 87 | Claude Code | Claude Haiku 4.5 | 2025-11-04 | Anthropic | Anthropic | 27.5% ± 2.8 |
| 88 | Dakou Agent | Qwen 3 Coder 480B | 2025-12-28 | iflow | Alibaba | 27.2% ± 2.6 |
| 89 | OpenHands | Grok 4 | 2025-11-02 | OpenHands | xAI | 27.2% ± 3.1 |
| 90 | OpenHands | Kimi K2 Instruct | 2025-11-02 | OpenHands | Moonshot AI | 26.7% ± 2.7 |
| 91 | Mini-SWE-Agent | Gemini 2.5 Pro | 2025-11-03 | Princeton | | 26.1% ± 2.5 |
| 92 | Mini-SWE-Agent | Grok Code Fast 1 | 2025-11-03 | Princeton | xAI | 25.8% ± 2.6 |
| 93 | Mini-SWE-Agent | Grok 4 | 2025-11-03 | Princeton | xAI | 25.4% ± 2.9 |
| 94 | OpenHands | Qwen 3 Coder 480B | 2025-11-02 | OpenHands | Alibaba | 25.4% ± 2.6 |
| 95 | Terminus 2 | GLM 4.6 | 2025-11-01 | Terminal Bench | Z.ai | 24.5% ± 2.4 |
| 96 | Terminus 2 | GPT-5-Mini | 2025-10-31 | Terminal Bench | OpenAI | 24.0% ± 2.5 |
| 97 | Terminus 2 | Qwen 3 Coder 480B | 2025-11-01 | Terminal Bench | Alibaba | 23.9% ± 2.8 |
| 98 | Terminus 2 | Grok 4 | 2025-10-31 | Terminal Bench | xAI | 23.1% ± 2.9 |
| 99 | Mini-SWE-Agent | GPT-5-Mini | 2025-11-03 | Princeton | OpenAI | 22.2% ± 2.6 |
| 100 | Gemini CLI | Gemini 2.5 Pro | 2025-11-04 | | | 19.6% ± 2.9 |
| 101 | Terminus 2 | GPT-OSS-120B | 2025-11-01 | Terminal Bench | OpenAI | 18.7% ± 2.7 |
| 102 | Mini-SWE-Agent | Gemini 2.5 Flash | 2025-11-03 | Princeton | | 17.1% ± 2.5 |
| 103 | Terminus 2 | Gemini 2.5 Flash | 2025-10-31 | Terminal Bench | | 16.9% ± 2.4 |
| 104 | OpenHands | Gemini 2.5 Pro | 2025-11-02 | OpenHands | | 16.4% ± 2.8 |
| 105 | OpenHands | Gemini 2.5 Flash | 2025-11-02 | OpenHands | | 16.4% ± 2.4 |
| 106 | Gemini CLI | Gemini 2.5 Flash | 2025-11-04 | | | 15.4% ± 2.3 |
| 107 | Mini-SWE-Agent | GPT-OSS-120B | 2025-11-03 | Princeton | OpenAI | 14.2% ± 2.3 |
| 108 | Terminus 2 | Grok Code Fast 1 | 2025-10-31 | Terminal Bench | xAI | 14.2% ± 2.5 |
| 109 | OpenHands | Claude Haiku 4.5 | 2025-11-02 | OpenHands | Anthropic | 13.9% ± 2.7 |
| 110 | Codex CLI | GPT-5-Nano | 2025-11-04 | OpenAI | OpenAI | 11.5% ± 2.3 |
| 111 | OpenHands | GPT-5-Nano | 2025-11-02 | OpenHands | OpenAI | 9.9% ± 2.1 |
| 112 | Terminus 2 | GPT-5-Nano | 2025-10-31 | Terminal Bench | OpenAI | 7.9% ± 1.9 |
| 113 | Mini-SWE-Agent | GPT-5-Nano | 2025-11-03 | Princeton | OpenAI | 7.0% ± 1.9 |
| 114 | Mini-SWE-Agent | GPT-OSS-20B | 2025-11-03 | Princeton | OpenAI | 3.4% ± 1.4 |
| 115 | Terminus 2 | GPT-OSS-20B | 2025-11-01 | Terminal Bench | OpenAI | 3.1% ± 1.5 |
Results in this leaderboard correspond to terminal-bench@2.0.
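The ± values in the Accuracy column read as uncertainties on the reported pass rates. Assuming they are standard errors of a binomial proportion over n scored trials (an assumption; the page does not define the notation or state n), their magnitude can be sketched:

```python
import math

def accuracy_stderr(p: float, n: int) -> float:
    """Standard error of a binomial proportion, in percentage points."""
    return 100 * math.sqrt(p * (1 - p) / n)

# Hypothetical n = 500 trials (not stated on this page): an accuracy of
# 78.4% would carry a standard error of roughly 1.8 percentage points,
# on the order of the ± values shown in the table.
print(round(accuracy_stderr(0.784, 500), 1))  # → 1.8
```

Under this reading, differences between adjacent ranks are often smaller than the error bars, so nearby entries should be treated as statistically close.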
To submit your agent's results, email alex@laude.org or mikeam@cs.stanford.edu.
A Terminal-Bench team member ran the evaluation and verified the results.