This is ridiculous. I doubt this would work with a general AI, but it surely cannot work with LLMs, which understand exactly nothing about human behaviour.
They may not understand it, but they may very well be able to reproduce aspects of the feedback and comments left on similar pieces of software.
I agree that the approach shouldn't be run unsupervised, but I can imagine it yielding valuable insights for improving a product before real users ever interact with it.