Terminal-Bench Leaderboard
To reproduce a leaderboard result, run the harness against the pinned dataset version:

tb run -d terminal-bench-core==0.1.1 -a "<agent-name>" -m "<model-name>"
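For example, a concrete invocation might look like the sketch below. The agent and model identifiers are illustrative assumptions, not verified registry names; substitute the identifiers your agent and model are registered under in the harness.

```bash
# Minimal sketch of a reproduction run. "claude-code" and "claude-4-sonnet"
# are assumed placeholder identifiers -- replace them with the agent and
# model names registered in your installation.
tb run -d terminal-bench-core==0.1.1 \
  -a claude-code \
  -m claude-4-sonnet
```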
Rank | Agent | Model | Date | Agent Org | Model Org | Accuracy |
---|---|---|---|---|---|---|
1 | Warp | Multiple | 2025-06-23 | Warp | Anthropic | 52.0% ± 1.0 |
2 | Engine Labs | claude-4-sonnet | 2025-07-14 | Engine Labs | Anthropic | 44.8% ± 0.8 |
3 | Terminus 2 | claude-4-1-opus | 2025-08-11 | Stanford | Anthropic | 43.8% ± 1.4 |
4 | Claude Code | claude-4-opus | 2025-05-22 | Anthropic | Anthropic | 43.2% ± 1.3 |
5 | Letta | claude-4-sonnet | 2025-08-04 | Letta | Anthropic | 42.5% ± 0.8 |
6 | Goose | claude-4-opus | 2025-07-12 | Block | Anthropic | 42.0% ± 1.3 |
7 | OpenHands | claude-4-sonnet | 2025-07-14 | OpenHands | Anthropic | 41.3% ± 0.7 |
8 | Terminus 2 | gpt-5 | 2025-08-11 | Stanford | OpenAI | 41.3% ± 1.1 |
9 | Terminus 1 | GLM-4.5 | 2025-07-31 | Stanford | Z.ai | 39.9% ± 1.0 |
10 | Terminus 2 | claude-4-opus | 2025-08-05 | Stanford | Anthropic | 39.0% ± 0.4 |
11 | Terminus 2 | claude-4-sonnet | 2025-08-05 | Stanford | Anthropic | 36.4% ± 0.6 |
12 | Claude Code | claude-4-sonnet | 2025-05-22 | Anthropic | Anthropic | 35.5% ± 1.0 |
13 | Terminus 1 | glaive-swe-v1 | 2025-08-14 | Stanford | Glaive | 35.3% ± 0.7 |
14 | Claude Code | claude-3-7-sonnet | 2025-05-16 | Anthropic | Anthropic | 35.2% ± 1.3 |
15 | Goose | claude-4-sonnet | 2025-07-12 | Block | Anthropic | 34.3% ± 1.0 |
16 | Terminus 1 | claude-3-7-sonnet | 2025-05-16 | Stanford | Anthropic | 30.6% ± 1.9 |
17 | Terminus 1 | gpt-4.1 | 2025-05-15 | Stanford | OpenAI | 30.3% ± 2.1 |
18 | Terminus 1 | o3 | 2025-05-15 | Stanford | OpenAI | 30.2% ± 0.9 |
19 | Terminus 1 | gpt-5 | 2025-08-07 | Stanford | OpenAI | 30.0% ± 0.9 |
20 | Goose | o4-mini | 2025-05-18 | Block | OpenAI | 27.5% ± 1.3 |
21 | Terminus 1 | gemini-2.5-pro | 2025-05-15 | Stanford | Google | 25.3% ± 2.8 |
22 | Codex CLI | o4-mini | 2025-05-15 | OpenAI | OpenAI | 20.0% ± 1.5 |
23 | Terminus 1 | o4-mini | 2025-05-15 | Stanford | OpenAI | 18.5% ± 1.4 |
24 | Terminus 1 | grok-3-beta | 2025-05-17 | Stanford | xAI | 17.5% ± 4.2 |
25 | Terminus 1 | gemini-2.5-flash | 2025-05-17 | Stanford | Google | 16.8% ± 1.3 |
26 | Terminus 1 | Llama-4-Maverick-17B | 2025-05-15 | Stanford | Meta | 15.5% ± 1.7 |
27 | TerminalAgent | Qwen3-32B | 2025-07-31 | Dan Austin | Alibaba | 15.5% ± 1.1 |
28 | Codex CLI | codex-mini-latest | 2025-05-18 | OpenAI | OpenAI | 11.3% ± 1.6 |
29 | Codex CLI | gpt-4.1 | 2025-05-15 | OpenAI | OpenAI | 8.3% ± 1.4 |
30 | Terminus 1 | Qwen3-235B | 2025-05-15 | Stanford | Alibaba | 6.6% ± 1.4 |
31 | Terminus 1 | DeepSeek-R1 | 2025-05-15 | Stanford | DeepSeek | 5.7% ± 0.7 |
Results in this leaderboard correspond to terminal-bench-core==0.1.1.
Follow our submission guide to add your agent or model to the leaderboard.