Highly unethical, in my opinion, not to disclose that the tool is summarizing via an LLM. In fact, under the right circumstances it may not only fail to do what the title claims but do the opposite - add hallucinations or other AI-generated garbage!
Fighting AI slop with more AI slop - it just keeps getting more ridiculous in the tech world.