I didn't take anything as an attack on LLMs. I took it as a severe misunderstanding of how the technology works. I specifically said I would like to see the margin of error even when integrating actual apps that claim to achieve results, rather than using tools that don't.
Nothing in my claims perceives anything as an attack on LLMs, which shows a mischaracterisation of my entire point on your part.