Imagine throwing orders of magnitude more compute at these problems - we might see something like Monte Carlo tree search over LLM outputs, with an LLM judge pruning weak branches as the tree grows.
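A rough sketch of what that could look like (closer to a pruned tree/beam search than full MCTS with rollouts and backpropagation). The `generate_continuations` and `judge_score` functions are hypothetical stubs standing in for real model and judge calls:

```python
import random
from dataclasses import dataclass, field

# Hypothetical stubs -- swap in real LLM / judge API calls.
def generate_continuations(prompt: str, n: int) -> list[str]:
    """Sample n candidate continuations from the base LLM (stubbed)."""
    return [f"{prompt} [continuation {i}]" for i in range(n)]

def judge_score(text: str) -> float:
    """Ask an LLM judge to rate a partial output in [0, 1] (stubbed)."""
    return random.random()

@dataclass
class Node:
    text: str
    score: float
    children: list["Node"] = field(default_factory=list)

def tree_search(prompt: str, depth: int = 3, branching: int = 4,
                keep_top: int = 2) -> Node:
    """Grow a tree of continuations; the judge prunes weak branches."""
    root = Node(text=prompt, score=judge_score(prompt))
    frontier = [root]
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            # Expand: sample several continuations of this branch.
            kids = [Node(text=t, score=judge_score(t))
                    for t in generate_continuations(node.text, branching)]
            # Prune: keep only the judge's top-rated children.
            kids.sort(key=lambda n: n.score, reverse=True)
            node.children = kids[:keep_top]
            next_frontier.extend(node.children)
        frontier = next_frontier
    return root

def best_leaf(node: Node) -> Node:
    """Walk the pruned tree and return the highest-scoring leaf."""
    if not node.children:
        return node
    return max((best_leaf(c) for c in node.children), key=lambda n: n.score)

if __name__ == "__main__":
    tree = tree_search("Explain why the sky is blue.")
    print(best_leaf(tree).text)
```

The point is how much extra inference this burns: every expansion multiplies the number of generation and judge calls, which is exactly the kind of thing that only becomes practical with much cheaper compute.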
+ We could have an LLM continuously monitor our log files and alert on, propose, or even fix issues 24/7. If intelligence becomes cheap enough, this would be an enormous market.
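A minimal sketch of that loop, assuming a log file at a made-up path and an `llm_triage` stub in place of a real LLM call:

```python
import time
from pathlib import Path

LOG_PATH = Path("/var/log/app.log")  # hypothetical log file
POLL_SECONDS = 30

def llm_triage(log_chunk: str) -> str | None:
    """Stub for an LLM call that inspects new log lines and returns an
    alert / proposed fix, or None if nothing looks wrong."""
    if "ERROR" in log_chunk or "Traceback" in log_chunk:
        return f"Possible issue detected:\n{log_chunk[:500]}"
    return None

def watch_log() -> None:
    """Tail the log file and hand every new chunk to the LLM triager."""
    offset = LOG_PATH.stat().st_size if LOG_PATH.exists() else 0
    while True:
        if LOG_PATH.exists():
            with LOG_PATH.open() as f:
                f.seek(offset)
                chunk = f.read()
                offset = f.tell()
            if chunk:
                alert = llm_triage(chunk)
                if alert:
                    print(alert)  # or page someone / open a PR with a fix
        time.sleep(POLL_SECONDS)

if __name__ == "__main__":
    watch_log()
```

The loop itself is trivial; the economics only work if each triage call is cheap enough to run on every chunk of every log, all day.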
Having an LLM run as a "fact checker"/coach over everything you write would also be a great addition.