This AI is too "aligned" to return anything of value, considering the content it has to look into and the questions it needs to answer.
What do you mean?
As in, OpenAI, Anthropic, and Google's models won't follow instructions regarding forensics for this?