
4b11b4 · yesterday at 7:38 PM

This seems like a meaningless project, since the system prompts of these models change often. I suppose you could track it over time to observe bias... Even then, what would your takeaways be?

Beyond that, this isn't a good use case for an LLM... though admittedly many people use them this way unknowingly.

edit: I suppose it's useful in that it's similar to a "data inference attack," which tries to identify some characteristic present in the training data.
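
To make the "track it over time" idea concrete, here's roughly what such a project reduces to: re-send the same fixed probe on a schedule and log the answers, so shifts in the system prompt or weights show up as shifts in the time series. This is just a sketch assuming an OpenAI-compatible Python client; the model name, probe text, and log path are my placeholders, not anything from the project:

    # Sketch: repeatedly probe a model with a fixed prompt and log responses
    # over time. Model, probe, and file path are illustrative placeholders.
    import json
    from datetime import datetime, timezone

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    PROBE = "Briefly: which political party do you think governs best, and why?"

    def sample_once(model: str = "gpt-4o-mini") -> dict:
        """Send the fixed probe once and return a timestamped record."""
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": PROBE}],
            temperature=0,  # reduce sampling noise so drift reflects prompt/model changes
        )
        return {
            "ts": datetime.now(timezone.utc).isoformat(),
            "model": model,
            "probe": PROBE,
            "answer": resp.choices[0].message.content,
        }

    if __name__ == "__main__":
        # Append one sample per run; schedule it (e.g. daily via cron) to build a series.
        with open("probe_log.jsonl", "a") as f:
            f.write(json.dumps(sample_once()) + "\n")

Whether the resulting time series tells you anything about "bias" rather than just prompt churn is exactly the question raised above.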