I guess it depends on your definition of "intentionally"... maybe I am giving people too much credit, but I have a feeling that dark patterns are used not because the implementers learn about them as transparently exploitative techniques and pursue them, but because the implementers are willfully ignorant and choose to chase results without examining the costs (and to ignore those costs when they do learn about them). I am not saying this morally excuses the behavior, but I think it does mean it is not that different from what is happening with LLMs. Just as choosing an innocuous-seeming rule like "if a social media post generates a lot of comments, show it to more people" can lead to the dark pattern of showing misleading, socially divisive content to more and more people, choosing to optimize an LLM for user approval leads to the dark pattern of sycophantic LLMs that deepen users' isolation and delusions.
Maybe we have different definitions of dark patterns.