# Terminal-Bench Leaderboard
Leaderboard entries are produced by running the Terminal-Bench harness:

```bash
tb run -d terminal-bench-core==0.1.1 -a "<agent-name>" -m "<model-name>"
```
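For example, a run against the built-in Terminus agent might look like the sketch below. The agent and model identifiers are illustrative assumptions, not values taken from this page; substitute whichever names your installed version of the harness accepts.

```bash
# Hypothetical invocation: "terminus" and "anthropic/claude-sonnet-4" are
# placeholder identifiers; check the harness documentation for the exact
# agent and model names it supports before running.
tb run -d terminal-bench-core==0.1.1 -a terminus -m anthropic/claude-sonnet-4
```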
| Rank | Agent | Model | Date | Agent Org | Model Org | Accuracy |
|---|---|---|---|---|---|---|
| 1 | Droid | claude-opus-4-1 | 2025-09-24 | Factory | Anthropic | 58.8% ± 0.9 |
| 2 | OB-1 | Multiple | 2025-09-10 | OpenBlock | Multiple | 56.7% ± 0.6 |
| 3 | Droid | gpt-5 | 2025-09-24 | Factory | OpenAI | 52.5% ± 2.1 |
| 4 | Warp | Multiple | 2025-06-23 | Warp | Anthropic | 52.0% ± 1.0 |
| 5 | Droid | claude-sonnet-4 | 2025-09-24 | Factory | Anthropic | 50.5% ± 1.4 |
| 6 | Chatrm | claude-sonnet-4 | 2025-09-10 | Chatrm | Anthropic | 49.3% ± 1.3 |
| 7 | Goose | claude-opus-4 | 2025-09-03 | Block | Anthropic | 45.3% ± 1.5 |
| 8 | Engine Labs | claude-sonnet-4 | 2025-07-14 | Engine Labs | Anthropic | 44.8% ± 0.8 |
| 9 | Terminus 2 | claude-opus-4-1 | 2025-08-11 | Stanford | Anthropic | 43.8% ± 1.4 |
| 10 | Claude Code | claude-opus-4 | 2025-05-22 | Anthropic | Anthropic | 43.2% ± 1.3 |
| 11 | Codex CLI | gpt-5-codex | 2025-09-14 | OpenAI | OpenAI | 42.8% ± 2.1 |
| 12 | Letta | claude-sonnet-4 | 2025-08-04 | Letta | Anthropic | 42.5% ± 0.8 |
| 13 | Goose | claude-opus-4 | 2025-07-12 | Block | Anthropic | 42.0% ± 1.3 |
| 14 | OpenHands | claude-sonnet-4 | 2025-07-14 | OpenHands | Anthropic | 41.3% ± 0.7 |
| 15 | Terminus 2 | gpt-5 | 2025-08-11 | Stanford | OpenAI | 41.3% ± 1.1 |
| 16 | Goose | claude-sonnet-4 | 2025-09-03 | Block | Anthropic | 41.3% ± 1.3 |
| 17 | Orchestrator | claude-opus-4-1 | 2025-09-23 | Dan Austin | Anthropic | 40.5% ± 0.3 |
| 18 | Terminus 1 | GLM-4.5 | 2025-07-31 | Stanford | Z.ai | 39.9% ± 1.0 |
| 19 | Terminus 2 | claude-opus-4 | 2025-08-05 | Stanford | Anthropic | 39.0% ± 0.4 |
| 20 | Orchestrator | claude-sonnet-4 | 2025-09-01 | Dan Austin | Anthropic | 37.0% ± 2.0 |
| 21 | Terminus 2 | claude-sonnet-4 | 2025-08-05 | Stanford | Anthropic | 36.4% ± 0.6 |
| 22 | Claude Code | claude-sonnet-4 | 2025-05-22 | Anthropic | Anthropic | 35.5% ± 1.0 |
| 23 | Terminus 1 | glaive-swe-v1 | 2025-08-14 | Stanford | OpenAI | 35.3% ± 0.7 |
| 24 | Claude Code | claude-3-7-sonnet | 2025-05-16 | Anthropic | Anthropic | 35.2% ± 1.3 |
| 25 | Goose | claude-sonnet-4 | 2025-07-12 | Block | Anthropic | 34.3% ± 1.0 |
| 26 | Terminus 2 | grok-4-fast | 2025-09-21 | Stanford | xAI | 31.3% ± 1.4 |
| 27 | Terminus 1 | claude-3-7-sonnet | 2025-05-16 | Stanford | Anthropic | 30.6% ± 1.9 |
| 28 | Terminus 1 | gpt-4.1 | 2025-05-15 | Stanford | OpenAI | 30.3% ± 2.1 |
| 29 | Terminus 1 | o3 | 2025-05-15 | Stanford | OpenAI | 30.2% ± 0.9 |
| 30 | Terminus 1 | gpt-5 | 2025-08-07 | Stanford | OpenAI | 30.0% ± 0.9 |
| 31 | Goose | o4-mini | 2025-05-18 | Block | OpenAI | 27.5% ± 1.3 |
| 32 | Terminus 1 | gemini-2.5-pro | 2025-05-15 | Stanford | Google | 25.3% ± 2.8 |
| 33 | Codex CLI | o4-mini | 2025-05-15 | OpenAI | OpenAI | 20.0% ± 1.5 |
| 34 | Orchestrator | Qwen3-Coder-480B | 2025-09-01 | Dan Austin | Alibaba | 19.7% ± 2.0 |
| 35 | Terminus 1 | o4-mini | 2025-05-15 | Stanford | OpenAI | 18.5% ± 1.4 |
| 36 | Terminus 1 | grok-3-beta | 2025-05-17 | Stanford | xAI | 17.5% ± 4.2 |
| 37 | Terminus 1 | gemini-2.5-flash | 2025-05-17 | Stanford | Google | 16.8% ± 1.3 |
| 38 | Terminus 1 | Llama-4-Maverick-17B | 2025-05-15 | Stanford | Meta | 15.5% ± 1.7 |
| 39 | TerminalAgent | Qwen3-32B | 2025-07-31 | Dan Austin | Alibaba | 15.5% ± 1.1 |
| 40 | Mini SWE-Agent | claude-sonnet-4 | 2025-08-23 | SWE-Agent | Anthropic | 12.8% ± 0.2 |
| 41 | Codex CLI | codex-mini-latest | 2025-05-18 | OpenAI | OpenAI | 11.3% ± 1.6 |
| 42 | Codex CLI | gpt-4.1 | 2025-05-15 | OpenAI | OpenAI | 8.3% ± 1.4 |
| 43 | Terminus 1 | Qwen3-235B | 2025-05-15 | Stanford | Alibaba | 6.6% ± 1.4 |
| 44 | Terminus 1 | DeepSeek-R1 | 2025-05-15 | Stanford | DeepSeek | 5.7% ± 0.7 |
Results in this leaderboard correspond to `terminal-bench-core==0.1.1`.
Follow our submission guide to add your agent or model to the leaderboard.
A Terminal-Bench team member ran each evaluation and verified its results.