terminal-bench@2.0 Leaderboard

Note: submissions may not modify timeouts or resources.

Run with a registered agent:

harbor run -d terminal-bench@2.0 -a "agent" -m "model" -k 5

Run with a custom agent class:

harbor run -d terminal-bench@2.0 --agent-import-path "path.to.agent:SomeAgent" -k 5

Showing 103 entries

| Rank | Agent | Model | Date | Agent Org | Model Org | Accuracy |
|------|-------|-------|------|-----------|-----------|----------|
| 1 | Simple Codex | GPT-5.3-Codex | 2026-02-06 | OpenAI | OpenAI | 75.1% ± 2.4 |
| 2 | CodeBrain-1 | GPT-5.3-Codex | 2026-02-10 | Feeling AI | OpenAI | 70.3% ± 2.6 |
| 3 | Droid | Claude Opus 4.6 | 2026-02-05 | Factory | Anthropic | 69.9% ± 2.5 |
| 4 | Mux | GPT-5.3-Codex | 2026-02-09 | Coder | OpenAI | 68.5% ± 2.4 |
| 5 | Deep Agents | GPT-5.2-Codex | 2026-02-12 | LangChain | OpenAI | 66.5% ± 3.1 |
| 6 | Mux | Claude Opus 4.6 | 2026-02-13 | Coder | Anthropic | 66.5% ± 2.5 |
| 7 | Droid | GPT-5.2 | 2025-12-24 | Factory | OpenAI | 64.9% ± 2.8 |
| 8 | Ante | Gemini 3 Pro | 2026-01-06 | Antigma Labs | Google | 64.7% ± 2.7 |
| 9 | Terminus 2 | GPT-5.3-Codex | 2026-02-05 | Terminal Bench | OpenAI | 64.7% ± 2.7 |
| 10 | Junie CLI | Gemini 3 Flash | 2025-12-23 | JetBrains | Google | 64.3% ± 2.8 |
| 11 | Droid | Claude Opus 4.5 | 2025-12-11 | Factory | Anthropic | 63.1% ± 2.7 |
| 12 | Terminus 2 | Claude Opus 4.6 | 2026-02-06 | Terminal Bench | Anthropic | 62.9% ± 2.7 |
| 13 | Codex CLI | GPT-5.2 | 2025-12-18 | OpenAI | OpenAI | 62.9% ± 3.0 |
| 14 | CodeBrain-1 | Gemini 3 Pro | 2026-02-05 | Feeling AI | Google | 62.2% ± 2.6 |
| 15 | II-Agent | Gemini 3 Pro | 2025-12-23 | Intelligent Internet | Google | 61.8% ± 2.8 |
| 16 | Warp | Multiple | 2025-12-12 | Warp | Multiple | 61.2% ± 3.0 |
| 17 | Droid | Gemini 3 Pro | 2025-12-24 | Factory | Google | 61.1% ± 2.8 |
| 18 | Mux | GPT-5.2 | 2026-01-17 | Coder | OpenAI | 60.7% ± N/A |
| 19 | Codex CLI | GPT-5.1-Codex-Max | 2025-11-24 | OpenAI | OpenAI | 60.4% ± 2.7 |
| 20 | Warp | Multiple | 2025-11-20 | Warp | Multiple | 59.1% ± 2.8 |
| 21 | Letta Code | Claude Opus 4.5 | 2025-12-17 | Letta | Anthropic | 59.1% ± 2.4 |
| 22 | Mux | Claude Opus 4.5 | 2026-01-17 | Coder | Anthropic | 58.4% ± N/A |
| 23 | Abacus AI Desktop | Multiple | 2025-12-11 | Abacus.AI | Multiple | 58.4% ± 2.8 |
| 24 | Claude Code | Claude Opus 4.6 | 2026-02-07 | Anthropic | Anthropic | 58.0% ± 2.9 |
| 25 | Codex CLI | GPT-5.1-Codex | 2025-11-16 | OpenAI | OpenAI | 57.8% ± 2.9 |
| 26 | Terminus 2 | Claude Opus 4.5 | 2025-11-22 | Terminal Bench | Anthropic | 57.8% ± 2.5 |
| 27 | Terminus 2 | Gemini 3 Pro | 2025-11-21 | Terminal Bench | Google | 56.9% ± 2.5 |
| 28 | Letta Code | Gemini 3 Pro | 2025-12-17 | Letta | Google | 56.0% ± 3.0 |
| 29 | Goose | Claude Opus 4.5 | 2025-12-11 | Block | Anthropic | 54.3% ± 2.6 |
| 30 | Terminus 2 | GPT-5.2 | 2025-12-12 | Terminal Bench | OpenAI | 54.0% ± 2.9 |
| 31 | Letta Code | GPT-5.1-Codex | 2025-12-17 | Letta | OpenAI | 53.5% ± 2.8 |
| 32 | Claude Code | Claude Opus 4.5 | 2025-12-18 | Anthropic | Anthropic | 52.1% ± 2.5 |
| 33 | OpenHands | Claude Opus 4.5 | 2026-01-04 | OpenHands | Anthropic | 51.9% ± 2.9 |
| 34 | OpenCode | Claude Opus 4.5 | 2026-01-12 | Anomaly Innovations | Anthropic | 51.7% ± N/A |
| 35 | Terminus 2 | Gemini 3 Flash | 2026-01-07 | Terminal Bench | Google | 51.7% ± 3.1 |
| 36 | Gemini CLI | Gemini 3 Flash | 2025-12-23 | Google | Google | 51.0% ± 3.0 |
| 37 | Warp | Multiple | 2025-11-11 | Warp | Multiple | 50.1% ± 2.7 |
| 38 | Codex CLI | GPT-5 | 2025-11-04 | OpenAI | OpenAI | 49.6% ± 2.9 |
| 39 | Terminus 2 | GPT-5.1 | 2025-11-16 | Terminal Bench | OpenAI | 47.6% ± 2.8 |
| 40 | CAMEL-AI | Claude Sonnet 4.5 | 2025-12-24 | CAMEL-AI | Anthropic | 46.5% ± 2.4 |
| 41 | Codex CLI | GPT-5-Codex | 2025-11-04 | OpenAI | OpenAI | 44.3% ± 2.7 |
| 42 | OpenHands | GPT-5 | 2025-11-02 | OpenHands | OpenAI | 43.8% ± 3.0 |
| 43 | Terminus 2 | GPT-5-Codex | 2025-10-31 | Terminal Bench | OpenAI | 43.4% ± 2.9 |
| 44 | Terminus 2 | Kimi K2.5 | 2026-02-04 | Terminal Bench | Kimi | 43.2% ± 2.9 |
| 45 | Codex CLI | GPT-5.1-Codex-Mini | 2025-11-17 | OpenAI | OpenAI | 43.1% ± 3.0 |
| 46 | Goose | Claude Sonnet 4.5 | 2025-12-11 | Block | Anthropic | 43.1% ± 2.6 |
| 47 | Terminus 2 | Claude Sonnet 4.5 | 2025-10-31 | Terminal Bench | Anthropic | 42.8% ± 2.8 |
| 48 | MAYA | Claude 4.5 Sonnet | 2026-01-04 | ADYA | Anthropic | 42.7% ± N/A |
| 49 | OpenHands | Claude Sonnet 4.5 | 2025-11-02 | OpenHands | Anthropic | 42.6% ± 2.8 |
| 50 | Mini-SWE-Agent | Claude Sonnet 4.5 | 2025-11-03 | Princeton | Anthropic | 42.5% ± 2.8 |
| 51 | Mini-SWE-Agent | GPT-5-Codex | 2025-11-03 | Princeton | OpenAI | 41.3% ± 2.8 |
| 52 | Claude Code | Claude Sonnet 4.5 | 2025-11-04 | Anthropic | Anthropic | 40.1% ± 2.9 |
| 53 | Terminus 2 | DeepSeek-V3.2 | 2026-02-10 | Terminal Bench | DeepSeek | 39.6% ± 2.8 |
| 54 | Terminus 2 | Claude Opus 4.1 | 2025-10-31 | Terminal Bench | Anthropic | 38.0% ± 2.6 |
| 55 | OpenHands | Claude Opus 4.1 | 2025-11-02 | OpenHands | Anthropic | 36.9% ± 2.7 |
| 56 | Terminus 2 | GPT-5.1-Codex | 2025-11-17 | Terminal Bench | OpenAI | 36.9% ± 3.2 |
| 57 | Claude Code | MiniMax M2.1 | 2025-12-22 | Anthropic | MiniMax | 36.6% ± 2.9 |
| 58 | Terminus 2 | Kimi K2 Thinking | 2025-11-11 | Terminal Bench | Moonshot AI | 35.7% ± 2.8 |
| 59 | Goose | Claude Haiku 4.5 | 2025-12-11 | Block | Anthropic | 35.5% ± 2.9 |
| 60 | Terminus 2 | GPT-5 | 2025-10-31 | Terminal Bench | OpenAI | 35.2% ± 3.1 |
| 61 | Mini-SWE-Agent | Claude Opus 4.1 | 2025-11-03 | Princeton | Anthropic | 35.1% ± 2.5 |
| 62 | Claude Code | Claude Opus 4.1 | 2025-11-04 | Anthropic | Anthropic | 34.8% ± 2.9 |
| 63 | spoox-m | GPT-5-Mini | 2025-12-24 | TUM | OpenAI | 34.8% ± 2.7 |
| 64 | Mini-SWE-Agent | GPT-5 | 2025-11-03 | Princeton | OpenAI | 33.9% ± 2.9 |
| 65 | Terminus 2 | GLM 4.7 | 2026-01-28 | Terminal Bench | Z-AI | 33.4% ± 2.8 |
| 66 | Claude Code | GLM 4.7 | 2026-02-08 | Anthropic | Z-AI | 33.3% ± 2.5 |
| 67 | Terminus 2 | Gemini 2.5 Pro | 2025-10-31 | Terminal Bench | Google | 32.6% ± 3.0 |
| 68 | Codex CLI | GPT-5-Mini | 2025-11-04 | OpenAI | OpenAI | 31.9% ± 3.0 |
| 69 | Terminus 2 | MiniMax M2 | 2025-11-01 | Terminal Bench | MiniMax | 30.0% ± 2.7 |
| 70 | Mini-SWE-Agent | Claude Haiku 4.5 | 2025-11-03 | Princeton | Anthropic | 29.8% ± 2.5 |
| 71 | Terminus 2 | MiniMax M2.1 | 2025-12-23 | Terminal Bench | MiniMax | 29.2% ± 2.9 |
| 72 | OpenHands | GPT-5-Mini | 2025-11-02 | OpenHands | OpenAI | 29.2% ± 2.8 |
| 73 | Terminus 2 | Claude Haiku 4.5 | 2025-10-31 | Terminal Bench | Anthropic | 28.3% ± 2.9 |
| 74 | Terminus 2 | Kimi K2 Instruct | 2025-11-01 | Terminal Bench | Moonshot AI | 27.8% ± 2.5 |
| 75 | Claude Code | Claude Haiku 4.5 | 2025-11-04 | Anthropic | Anthropic | 27.5% ± 2.8 |
| 76 | Dakou Agent | Qwen 3 Coder 480B | 2025-12-28 | iflow | Alibaba | 27.2% ± 2.6 |
| 77 | OpenHands | Grok 4 | 2025-11-02 | OpenHands | xAI | 27.2% ± 3.1 |
| 78 | OpenHands | Kimi K2 Instruct | 2025-11-02 | OpenHands | Moonshot AI | 26.7% ± 2.7 |
| 79 | Mini-SWE-Agent | Gemini 2.5 Pro | 2025-11-03 | Princeton | Google | 26.1% ± 2.5 |
| 80 | Mini-SWE-Agent | Grok Code Fast 1 | 2025-11-03 | Princeton | xAI | 25.8% ± 2.6 |
| 81 | OpenHands | Qwen 3 Coder 480B | 2025-11-02 | OpenHands | Alibaba | 25.4% ± 2.6 |
| 82 | Mini-SWE-Agent | Grok 4 | 2025-11-03 | Princeton | xAI | 25.4% ± 2.9 |
| 83 | Terminus 2 | GLM 4.6 | 2025-11-01 | Terminal Bench | Z.ai | 24.5% ± 2.4 |
| 84 | Terminus 2 | GPT-5-Mini | 2025-10-31 | Terminal Bench | OpenAI | 24.0% ± 2.5 |
| 85 | Terminus 2 | Qwen 3 Coder 480B | 2025-11-01 | Terminal Bench | Alibaba | 23.9% ± 2.8 |
| 86 | Terminus 2 | Grok 4 | 2025-10-31 | Terminal Bench | xAI | 23.1% ± 2.9 |
| 87 | Mini-SWE-Agent | GPT-5-Mini | 2025-11-03 | Princeton | OpenAI | 22.2% ± 2.6 |
| 88 | Gemini CLI | Gemini 2.5 Pro | 2025-11-04 | Google | Google | 19.6% ± 2.9 |
| 89 | Terminus 2 | GPT-OSS-120B | 2025-11-01 | Terminal Bench | OpenAI | 18.7% ± 2.7 |
| 90 | Mini-SWE-Agent | Gemini 2.5 Flash | 2025-11-03 | Princeton | Google | 17.1% ± 2.5 |
| 91 | Terminus 2 | Gemini 2.5 Flash | 2025-10-31 | Terminal Bench | Google | 16.9% ± 2.4 |
| 92 | OpenHands | Gemini 2.5 Pro | 2025-11-02 | OpenHands | Google | 16.4% ± 2.8 |
| 93 | OpenHands | Gemini 2.5 Flash | 2025-11-02 | OpenHands | Google | 16.4% ± 2.4 |
| 94 | Gemini CLI | Gemini 2.5 Flash | 2025-11-04 | Google | Google | 15.4% ± 2.3 |
| 95 | Terminus 2 | Grok Code Fast 1 | 2025-10-31 | Terminal Bench | xAI | 14.2% ± 2.5 |
| 96 | Mini-SWE-Agent | GPT-OSS-120B | 2025-11-03 | Princeton | OpenAI | 14.2% ± 2.3 |
| 97 | OpenHands | Claude Haiku 4.5 | 2025-11-02 | OpenHands | Anthropic | 13.9% ± 2.7 |
| 98 | Codex CLI | GPT-5-Nano | 2025-11-04 | OpenAI | OpenAI | 11.5% ± 2.3 |
| 99 | OpenHands | GPT-5-Nano | 2025-11-02 | OpenHands | OpenAI | 9.9% ± 2.1 |
| 100 | Terminus 2 | GPT-5-Nano | 2025-10-31 | Terminal Bench | OpenAI | 7.9% ± 1.9 |
| 101 | Mini-SWE-Agent | GPT-5-Nano | 2025-11-03 | Princeton | OpenAI | 7.0% ± 1.9 |
| 102 | Mini-SWE-Agent | GPT-OSS-20B | 2025-11-03 | Princeton | OpenAI | 3.4% ± 1.4 |
| 103 | Terminus 2 | GPT-OSS-20B | 2025-11-01 | Terminal Bench | OpenAI | 3.1% ± 1.5 |

Results in this leaderboard correspond to terminal-bench@2.0.
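The ± column appears to be an uncertainty estimate on each accuracy figure, though the leaderboard does not document how it is derived. As a purely illustrative sketch (an assumption, not the leaderboard's methodology), here is how a mean accuracy and standard error could be computed from per-task binary outcomes; the function name and the Bernoulli-SE formula are this sketch's choices:

```python
import math

def accuracy_with_se(outcomes: list[int]) -> tuple[float, float]:
    """Mean accuracy and standard error of the mean for 0/1 task outcomes.

    Hypothetical helper: assumes independent per-task pass/fail results
    and a simple Bernoulli standard error, p*(1-p)/n under the root.
    """
    n = len(outcomes)
    p = sum(outcomes) / n
    se = math.sqrt(p * (1 - p) / n)
    return p, se

# Example: 60 passes out of 80 tasks.
p, se = accuracy_with_se([1] * 60 + [0] * 20)
print(f"{100 * p:.1f}% ± {100 * se:.1f}")
```

Whether the leaderboard's ± denotes one standard error or a wider interval is not stated on this page.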

To submit your agent's results, email alex@laude.org or mikeam@cs.stanford.edu.

A Terminal-Bench team member ran the evaluation and verified the results.
