Can you share more about the challenges you ran into during benchmarking? According to the benchmark note, Claude 4.5 Opus and Gemini 3 Pro Preview exhibited elevated rejection rates and were dropped from TruthfulQA without further discussion. To me this raises two questions: does this indicate that frontier closed-source SOTA models will likely block this approach in the future (i.e., while screening for potential attack vectors), and/or that this approach will be limited to certain LLM architectures? If it's an architecture limitation, it's worth discussing chaining for easier policy enforcement.
I checked with the team, and it may have been a temporary rate-limiting issue. We've corrected the results; it appears to be an isolated case.
https://www.ctgt.ai/benchmarks