It does help A LOT in security research in particular.
For example, before AI I tended to avoid freelance pen-testing work because I didn't enjoy the tedium of reading tons of documentation about random platforms just to understand how they worked, or searching all over StackOverflow.
Now I can hand an LLM a random-looking error message and it will clearly and instantly tell me what the error means at a deep technical level: which engine, which version, which library or module. I can pen test platforms I have zero familiarity with.
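For concreteness, the triage step looks something like this (a minimal sketch using the OpenAI Python SDK; the model name, prompt wording, and sample stack trace are all illustrative, not my exact setup):

```python
# Sketch: ask an LLM to fingerprint a platform's stack from a raw error message.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set in the
# environment; the model, prompt, and error text below are placeholders.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

error_message = """
TypeError: Cannot read properties of undefined (reading 'split')
    at /app/node_modules/express/lib/router/layer.js:95:5
"""

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            "Given this raw error output from a web platform, identify the "
            "likely engine, framework, language, version range, and the "
            "library/module involved:\n" + error_message
        ),
    }],
)

print(response.choices[0].message.content)
```

From an answer like that you can usually jump straight to the relevant docs and known-issue lists instead of reverse-engineering the stack from scratch.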
I just know a few platforms, engines, and programming languages really well, and I can use that existing knowledge to find parallels in platforms I've never explored before.
The other day, on HackerOne, I found a pretty bad DoS vulnerability in a platform I'd never looked at before, built on an engine and a programming language I had never used professionally; I found the issue within an hour of starting my search.
Did you spend another hour confirming your understanding?