This is very confusingly written.
From the post I expected that the tasks were about analysing traces, but all the tasks in the repository are about adding instrumentation to code!
Some of the instructions don't give any guidance on how to do it; some specify which libraries to use.
"Use standard OTEL patterns" ... that's about as useful as saying "go write some code". There are a lot of ways to do instrumentation....
I'd be very curious HOW exactly the models fail.
Are the test sets just incredibly specific about what output they expect, so you get a lot of failures from tiny, subtle mismatches? Or do the models get the instrumentation categorically wrong?
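For example (a rough Python sketch I made up, not taken from the benchmark), here are two equally "standard" ways to instrument the same function with opentelemetry-python; if the grader expects one exact shape, the other fails on span and attribute names alone:

    # Two equally "standard" manual-instrumentation variants.
    # Span and attribute names are my own invention.
    from opentelemetry import trace

    tracer = trace.get_tracer(__name__)

    def charge_card_v1(order_id: str, amount: float) -> None:
        # Variant 1: terse span name, dotted attribute keys
        with tracer.start_as_current_span("charge_card") as span:
            span.set_attribute("order.id", order_id)
            span.set_attribute("payment.amount", amount)

    def charge_card_v2(order_id: str, amount: float) -> None:
        # Variant 2: module-qualified span name, flat attribute keys
        with tracer.start_as_current_span("billing.charge_card") as span:
            span.set_attribute("order_id", order_id)
            span.set_attribute("amount_usd", amount)

Both would pass a human code review; only one would match an exact-output check.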
Also important: do the models have access to a web search tool to read the library docs? OTel libraries are often complicated to use; without reading the latest docs or source code this would be quite tricky.
Some models have gotten better at adding dependencies, installing them, and then reading the source from wherever the package manager puts it, but many still don't handle this well.
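To make that concrete (purely illustrative; I don't know what the benchmark harness allows), an agent that can run Python can locate and read an installed package's source without any web access:

    # Locate an installed library's source so the agent can read it
    # instead of guessing the API from training data. Illustrative only.
    import importlib.util
    import pathlib

    spec = importlib.util.find_spec("opentelemetry")
    if spec and spec.submodule_search_locations:
        pkg_dir = pathlib.Path(next(iter(spec.submodule_search_locations)))
        # Print a few module paths the agent could open and read
        for path in sorted(pkg_dir.rglob("*.py"))[:10]:
            print(path)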
All in all, I'm quite skeptical that this is useful as a benchmark as it stands.
I'd be much more interested in tasks like:
Here are the trace/log outputs, here is the source code; find and fix the bug.
> "Use standard OTEL patterns" ... that's about as useful as saying "go write some code".
People tell you to put things like "use best practices" in your prompts all the time, and chide people who don't.
> Some of the instructions don't give any guidance on how to do it; some specify which libraries to use.
Supporting a piece of cloud software with a lot of microservices, I think this is a more general problem for humans too. The app I work on mandated some logging requirements, like which library to use, but that was it: different parts built by different teams ended up with all kinds of different behaviors.
As for the AI side, this is where I see limited context sizes causing issues when developing architecture that spans multiple products.
I looked into some of the tests, and the tasks are definitely AI-written. I think a separate AI call then generated the tests.
Like with robotaxis: OK, the thing is not perfect, but how does it compare to a human? I'm interviewing ops/SRE candidates at the moment, and I'm not so happy with what I see...
+1. I'm not sure tasks like "Add OTel instrumentation" belong in an SRE bench; they seem more like coding-bench material. I came here expecting to see things like how models perform at finding the root cause in 50 complicated microservice failure scenarios.
For AI-SRE tasks like finding the root cause of bugs and errors, I believe the key is to give the agent tools to query metrics, logs, and traces and understand the problem. I'm working on a similar OSS framework and benchmark (work in progress, using metrics and logs; demo: https://youtube.com/playlist?list=PLKWJ03cHcPr3Od1rwL7ErHW1p...). The context layer is semantics and Text2SQL for querying the right metrics and logs, and the benchmark is a set of skills that Claude Code or other agents can run with these tools to find the root cause of errors (rough sketch of the idea below the links):
Codd Semantic/Text2SQL engine: https://github.com/sathish316/codd_query_engine
PreCogs skills and simulated scenarios: https://github.com/sathish316/precogs_sre_oncall_skills
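The general shape is something like this (hypothetical names and schema, not the actual Codd/PreCogs API): the agent gets a tool that runs generated SQL over a logs/metrics store and reasons over the returned rows.

    # Hypothetical sketch of the query-tool idea; the table schema, function
    # name, and SQL are made up for illustration, not taken from Codd/PreCogs.
    import sqlite3

    def query_logs(db_path: str, sql: str) -> list[tuple]:
        """Run an agent-generated SQL query against a local logs store."""
        with sqlite3.connect(db_path) as conn:
            return conn.execute(sql).fetchall()

    # For "which service logged the most errors in the last hour?" the
    # generated SQL might be:
    SQL = """
    SELECT service, COUNT(*) AS errors
    FROM logs
    WHERE level = 'ERROR' AND ts >= datetime('now', '-1 hour')
    GROUP BY service
    ORDER BY errors DESC
    LIMIT 5;
    """
    # rows = query_logs("observability.db", SQL)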