terminal-bench@2.0 Leaderboard

Note: submissions may not modify timeouts or resources.

    harbor run -d terminal-bench@2.0 -a "agent" -m "model" -k 5
    harbor run -d terminal-bench@2.0 --agent-import-path "path.to.agent:SomeAgent" -k 5

Showing 119 entries

| Rank | Agent | Model | Date | Agent Org | Model Org | Accuracy |
|------|-------|-------|------|-----------|-----------|----------|
| 1 | ForgeCode | GPT-5.4 | 2026-03-12 | ForgeCode | OpenAI | 81.8% ± 2.0 |
| 2 | ForgeCode | Claude Opus 4.6 | 2026-03-12 | ForgeCode | Anthropic | 81.8% ± 1.7 |
| 3 | TongAgents | Gemini 3.1 Pro | 2026-03-13 | BIGAI | Google | 80.2% ± 2.6 |
| 4 | ForgeCode | Gemini 3.1 Pro | 2026-03-02 | ForgeCode | Google | 78.4% ± 1.8 |
| 5 | SageAgent | GPT-5.3-Codex | 2026-03-13 | OpenSage | OpenAI | 78.4% ± 2.2 |
| 6 | Droid | GPT-5.3-Codex | 2026-02-24 | Factory | OpenAI | 77.3% ± 2.2 |
| 7 | Capy | Claude Opus 4.6 | 2026-03-12 | Capy | Anthropic | 75.3% ± 2.4 |
| 8 | Simple Codex | GPT-5.3-Codex | 2026-02-06 | OpenAI | OpenAI | 75.1% ± 2.4 |
| 9 | Terminus-KIRA | Gemini 3.1 Pro | 2026-02-23 | KRAFTON AI | Google | 74.8% ± 2.6 |
| 10 | Terminus-KIRA | Claude Opus 4.6 | 2026-02-22 | KRAFTON AI | Anthropic | 74.7% ± 2.6 |
| 11 | Mux | GPT-5.3-Codex | 2026-03-06 | Coder | OpenAI | 74.6% ± 2.5 |
| 12 | MAYA-V2 | Claude Opus 4.6 | 2026-03-12 | ADYA | Anthropic | 72.1% ± 2.2 |
| 13 | TongAgents | Claude Opus 4.6 | 2026-02-22 | BIGAI | Anthropic | 71.9% ± 2.7 |
| 14 | Junie CLI | Multiple | 2026-03-07 | JetBrains | Multiple | 71.0% ± 2.9 |
| 15 | CodeBrain-1 | GPT-5.3-Codex | 2026-02-10 | Feeling AI | OpenAI | 70.3% ± 2.6 |
| 16 | Droid | Claude Opus 4.6 | 2026-02-05 | Factory | Anthropic | 69.9% ± 2.5 |
| 17 | Ante | Gemini 3 Pro | 2026-01-06 | Antigma Labs | Google | 69.4% ± 2.1 |
| 18 | Crux | Claude Opus 4.6 | 2026-02-23 | Roam | Anthropic | 66.9% ± N/A |
| 19 | Mux | Claude Opus 4.6 | 2026-02-13 | Coder | Anthropic | 66.5% ± 2.5 |
| 20 | Deep Agents | GPT-5.2-Codex | 2026-02-12 | LangChain | OpenAI | 66.5% ± 3.1 |
| 21 | SageAgent | Gemini 3 Pro | 2026-02-23 | OpenSage | Google | 65.2% ± 2.1 |
| 22 | Droid | GPT-5.2 | 2025-12-24 | Factory | OpenAI | 64.9% ± 2.8 |
| 23 | Terminus 2 | GPT-5.3-Codex | 2026-02-05 | Terminal Bench | OpenAI | 64.7% ± 2.7 |
| 24 | Junie CLI | Gemini 3 Flash | 2025-12-23 | JetBrains | Google | 64.3% ± 2.8 |
| 25 | Droid | Claude Opus 4.5 | 2025-12-11 | Factory | Anthropic | 63.1% ± 2.7 |
| 26 | Codex CLI | GPT-5.2 | 2025-12-18 | OpenAI | OpenAI | 62.9% ± 3.0 |
| 27 | Terminus 2 | Claude Opus 4.6 | 2026-02-06 | Terminal Bench | Anthropic | 62.9% ± 2.7 |
| 28 | CodeBrain-1 | Gemini 3 Pro | 2026-02-05 | Feeling AI | Google | 62.2% ± 2.6 |
| 29 | II-Agent | Gemini 3 Pro | 2025-12-23 | Intelligent Internet | Google | 61.8% ± 2.8 |
| 30 | Warp | Multiple | 2025-12-12 | Warp | Multiple | 61.2% ± 3.0 |
| 31 | Droid | Gemini 3 Pro | 2025-12-24 | Factory | Google | 61.1% ± 2.8 |
| 32 | Mux | GPT-5.2 | 2026-01-17 | Coder | OpenAI | 60.7% ± N/A |
| 33 | Codex CLI | GPT-5.1-Codex-Max | 2025-11-24 | OpenAI | OpenAI | 60.4% ± 2.7 |
| 34 | Warp | Multiple | 2025-11-20 | Warp | Multiple | 59.1% ± 2.8 |
| 35 | Letta Code | Claude Opus 4.5 | 2025-12-17 | Letta | Anthropic | 59.1% ± 2.4 |
| 36 | Abacus AI Desktop | Multiple | 2025-12-11 | Abacus.AI | Multiple | 58.4% ± 2.8 |
| 37 | Mux | Claude Opus 4.5 | 2026-01-17 | Coder | Anthropic | 58.4% ± N/A |
| 38 | Claude Code | Claude Opus 4.6 | 2026-02-07 | Anthropic | Anthropic | 58.0% ± 2.9 |
| 39 | Terminus 2 | Claude Opus 4.5 | 2025-11-22 | Terminal Bench | Anthropic | 57.8% ± 2.5 |
| 40 | Crux | GPT-5.1-Codex | 2025-11-16 | Roam | OpenAI | 57.8% ± 2.9 |
| 41 | Terminus 2 | Gemini 3 Pro | 2025-11-21 | Terminal Bench | Google | 56.9% ± 2.5 |
| 42 | Letta Code | Gemini 3 Pro | 2025-12-17 | Letta | Google | 56.0% ± 3.0 |
| 43 | Goose | Claude Opus 4.5 | 2025-12-11 | Block | Anthropic | 54.3% ± 2.6 |
| 44 | Terminus 2 | GPT-5.2 | 2025-12-12 | Terminal Bench | OpenAI | 54.0% ± 2.9 |
| 45 | Letta Code | GPT-5.1-Codex | 2025-12-17 | Letta | OpenAI | 53.5% ± 2.8 |
| 46 | Terminus 2 | GLM 5 | 2026-02-23 | Terminal Bench | Z.ai | 52.4% ± 2.6 |
| 47 | Claude Code | Claude Opus 4.5 | 2025-12-18 | Anthropic | Anthropic | 52.1% ± 2.5 |
| 48 | OpenHands | Claude Opus 4.5 | 2026-01-04 | OpenHands | Anthropic | 51.9% ± 2.9 |
| 49 | OpenCode | Claude Opus 4.5 | 2026-01-12 | Anomaly Innovations | Anthropic | 51.7% ± N/A |
| 50 | Terminus 2 | Gemini 3 Flash | 2026-01-07 | Terminal Bench | Google | 51.7% ± 3.1 |
| 51 | Warp | Multiple | 2025-11-11 | Warp | Multiple | 50.1% ± 2.7 |
| 52 | Codex CLI | GPT-5 | 2025-11-04 | OpenAI | OpenAI | 49.6% ± 2.9 |
| 53 | Terminus 2 | GPT-5.1 | 2025-11-16 | Terminal Bench | OpenAI | 47.6% ± 2.8 |
| 54 | Gemini CLI | Gemini 3 Flash | 2026-03-06 | Google | Google | 47.4% ± 3.0 |
| 55 | CAMEL-AI | Claude Sonnet 4.5 | 2025-12-24 | CAMEL-AI | Anthropic | 46.5% ± 2.4 |
| 56 | Codex CLI | GPT-5-Codex | 2025-11-04 | OpenAI | OpenAI | 44.3% ± 2.7 |
| 57 | OpenHands | GPT-5 | 2025-11-02 | OpenHands | OpenAI | 43.8% ± 3.0 |
| 58 | Terminus 2 | GPT-5-Codex | 2025-10-31 | Terminal Bench | OpenAI | 43.4% ± 2.9 |
| 59 | Terminus 2 | Kimi K2.5 | 2026-02-04 | Terminal Bench | Moonshot AI | 43.2% ± 2.9 |
| 60 | Goose | Claude Sonnet 4.5 | 2025-12-11 | Block | Anthropic | 43.1% ± 2.6 |
| 61 | Crux | GPT-5.1-Codex-Mini | 2025-11-17 | Roam | OpenAI | 43.1% ± 3.0 |
| 62 | Terminus 2 | Claude Sonnet 4.5 | 2025-10-31 | Terminal Bench | Anthropic | 42.8% ± 2.8 |
| 63 | MAYA-V2 | Claude Sonnet 4.5 | 2026-01-04 | ADYA | Anthropic | 42.7% ± N/A |
| 64 | OpenHands | Claude Sonnet 4.5 | 2025-11-02 | OpenHands | Anthropic | 42.6% ± 2.8 |
| 65 | Mini-SWE-Agent | Claude Sonnet 4.5 | 2025-11-03 | Princeton | Anthropic | 42.5% ± 2.8 |
| 66 | Terminus 2 | MiniMax M2.5 | 2026-02-23 | Terminal Bench | MiniMax | 42.2% ± 2.6 |
| 67 | Mini-SWE-Agent | GPT-5-Codex | 2025-11-03 | Princeton | OpenAI | 41.3% ± 2.8 |
| 68 | Claude Code | Claude Sonnet 4.5 | 2025-11-04 | Anthropic | Anthropic | 40.1% ± 2.9 |
| 69 | Terminus 2 | DeepSeek-V3.2 | 2026-02-10 | Terminal Bench | DeepSeek | 39.6% ± 2.8 |
| 70 | Terminus 2 | Claude Opus 4.1 | 2025-10-31 | Terminal Bench | Anthropic | 38.0% ± 2.6 |
| 71 | Terminus 2 | GPT-5.1-Codex | 2025-11-17 | Terminal Bench | OpenAI | 36.9% ± 3.2 |
| 72 | OpenHands | Claude Opus 4.1 | 2025-11-02 | OpenHands | Anthropic | 36.9% ± 2.7 |
| 73 | Crux | MiniMax M2.1 | 2025-12-22 | Roam | MiniMax | 36.6% ± 2.9 |
| 74 | Terminus 2 | Kimi K2 Thinking | 2025-11-11 | Terminal Bench | Moonshot AI | 35.7% ± 2.8 |
| 75 | Goose | Claude Haiku 4.5 | 2025-12-11 | Block | Anthropic | 35.5% ± 2.9 |
| 76 | Terminus 2 | GPT-5 | 2025-10-31 | Terminal Bench | OpenAI | 35.2% ± 3.1 |
| 77 | Mini-SWE-Agent | Claude Opus 4.1 | 2025-11-03 | Princeton | Anthropic | 35.1% ± 2.5 |
| 78 | Claude Code | Claude Opus 4.1 | 2025-11-04 | Anthropic | Anthropic | 34.8% ± 2.9 |
| 79 | spoox-m | GPT-5-Mini | 2025-12-24 | TUM | OpenAI | 34.8% ± 2.7 |
| 80 | Mini-SWE-Agent | GPT-5 | 2025-11-03 | Princeton | OpenAI | 33.9% ± 2.9 |
| 81 | Terminus 2 | GLM 4.7 | 2026-01-28 | Terminal Bench | Z.ai | 33.4% ± 2.8 |
| 82 | Crux | GLM 4.7 | 2026-02-08 | Roam | Z.ai | 33.3% ± 2.5 |
| 83 | Terminus 2 | Gemini 2.5 Pro | 2025-10-31 | Terminal Bench | Google | 32.6% ± 3.0 |
| 84 | Codex CLI | GPT-5-Mini | 2025-11-04 | OpenAI | OpenAI | 31.9% ± 3.0 |
| 85 | Terminus 2 | MiniMax M2 | 2025-11-01 | Terminal Bench | MiniMax | 30.0% ± 2.7 |
| 86 | Mini-SWE-Agent | Claude Haiku 4.5 | 2025-11-03 | Princeton | Anthropic | 29.8% ± 2.5 |
| 87 | OpenHands | GPT-5-Mini | 2025-11-02 | OpenHands | OpenAI | 29.2% ± 2.8 |
| 88 | Terminus 2 | MiniMax M2.1 | 2025-12-23 | Terminal Bench | MiniMax | 29.2% ± 2.9 |
| 89 | Terminus 2 | Claude Haiku 4.5 | 2025-10-31 | Terminal Bench | Anthropic | 28.3% ± 2.9 |
| 90 | Terminus 2 | Kimi K2 Instruct | 2025-11-01 | Terminal Bench | Moonshot AI | 27.8% ± 2.5 |
| 91 | Claude Code | Claude Haiku 4.5 | 2025-11-04 | Anthropic | Anthropic | 27.5% ± 2.8 |
| 92 | Dakou Agent | Qwen 3 Coder 480B | 2025-12-28 | iflow | Alibaba | 27.2% ± 2.6 |
| 93 | OpenHands | Grok 4 | 2025-11-02 | OpenHands | xAI | 27.2% ± 3.1 |
| 94 | OpenHands | Kimi K2 Instruct | 2025-11-02 | OpenHands | Moonshot AI | 26.7% ± 2.7 |
| 95 | Mini-SWE-Agent | Gemini 2.5 Pro | 2025-11-03 | Princeton | Google | 26.1% ± 2.5 |
| 96 | Mini-SWE-Agent | Grok Code Fast 1 | 2025-11-03 | Princeton | xAI | 25.8% ± 2.6 |
| 97 | OpenHands | Qwen 3 Coder 480B | 2025-11-02 | OpenHands | Alibaba | 25.4% ± 2.6 |
| 98 | Mini-SWE-Agent | Grok 4 | 2025-11-03 | Princeton | xAI | 25.4% ± 2.9 |
| 99 | Terminus 2 | GLM 4.6 | 2025-11-01 | Terminal Bench | Z.ai | 24.5% ± 2.4 |
| 100 | Terminus 2 | GPT-5-Mini | 2025-10-31 | Terminal Bench | OpenAI | 24.0% ± 2.5 |
| 101 | Terminus 2 | Qwen 3 Coder 480B | 2025-11-01 | Terminal Bench | Alibaba | 23.9% ± 2.8 |
| 102 | Terminus 2 | Grok 4 | 2025-10-31 | Terminal Bench | xAI | 23.1% ± 2.9 |
| 103 | Mini-SWE-Agent | GPT-5-Mini | 2025-11-03 | Princeton | OpenAI | 22.2% ± 2.6 |
| 104 | Gemini CLI | Gemini 2.5 Pro | 2025-11-04 | Google | Google | 19.6% ± 2.9 |
| 105 | Terminus 2 | GPT-OSS-120B | 2025-11-01 | Terminal Bench | OpenAI | 18.7% ± 2.7 |
| 106 | Mini-SWE-Agent | Gemini 2.5 Flash | 2025-11-03 | Princeton | Google | 17.1% ± 2.5 |
| 107 | Terminus 2 | Gemini 2.5 Flash | 2025-10-31 | Terminal Bench | Google | 16.9% ± 2.4 |
| 108 | OpenHands | Gemini 2.5 Flash | 2025-11-02 | OpenHands | Google | 16.4% ± 2.4 |
| 109 | OpenHands | Gemini 2.5 Pro | 2025-11-02 | OpenHands | Google | 16.4% ± 2.8 |
| 110 | Gemini CLI | Gemini 2.5 Flash | 2025-11-04 | Google | Google | 15.4% ± 2.3 |
| 111 | Terminus 2 | Grok Code Fast 1 | 2025-10-31 | Terminal Bench | xAI | 14.2% ± 2.5 |
| 112 | Mini-SWE-Agent | GPT-OSS-120B | 2025-11-03 | Princeton | OpenAI | 14.2% ± 2.3 |
| 113 | OpenHands | Claude Haiku 4.5 | 2025-11-02 | OpenHands | Anthropic | 13.9% ± 2.7 |
| 114 | Codex CLI | GPT-5-Nano | 2025-11-04 | OpenAI | OpenAI | 11.5% ± 2.3 |
| 115 | OpenHands | GPT-5-Nano | 2025-11-02 | OpenHands | OpenAI | 9.9% ± 2.1 |
| 116 | Terminus 2 | GPT-5-Nano | 2025-10-31 | Terminal Bench | OpenAI | 7.9% ± 1.9 |
| 117 | Mini-SWE-Agent | GPT-5-Nano | 2025-11-03 | Princeton | OpenAI | 7.0% ± 1.9 |
| 118 | Mini-SWE-Agent | GPT-OSS-20B | 2025-11-03 | Princeton | OpenAI | 3.4% ± 1.4 |
| 119 | Terminus 2 | GPT-OSS-20B | 2025-11-01 | Terminal Bench | OpenAI | 3.1% ± 1.5 |
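The ± values give the reported uncertainty on each accuracy estimate, so nearby ranks are often not meaningfully separated. A rough way to check is whether the two error bars overlap; the sketch below assumes ± denotes a symmetric interval around the point estimate, and `intervals_overlap` is a hypothetical helper, not part of the benchmark tooling:

```python
def intervals_overlap(acc_a: float, err_a: float, acc_b: float, err_b: float) -> bool:
    """True if [acc_a - err_a, acc_a + err_a] and [acc_b - err_b, acc_b + err_b] overlap."""
    return abs(acc_a - acc_b) <= err_a + err_b

# Ranks 1 and 3 above: 81.8 ± 2.0 vs 80.2 ± 2.6 -> intervals overlap,
# so the 1.6-point gap is within the combined uncertainty.
print(intervals_overlap(81.8, 2.0, 80.2, 2.6))  # True
```

Overlapping intervals do not prove the entries are equivalent; they only indicate the ranking difference is within the reported noise.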

Results in this leaderboard correspond to terminal-bench@2.0.

Send us an email to submit your agent's results: alex@laude.org or mikeam@cs.stanford.edu

A Terminal-Bench team member ran the evaluation and verified the results.