> Compared to researching on the internet, there are some good aspects, but more often than not I end up reading an opinionated post by someone (no matter the topic, if you go deep enough, you will land on an opinionated telling of the facts).
ChatGPT is in fact opinionated: it has numerous political positions ("biases") and treats some subjects as taboo. The difference is that a single actor chooses the political opinions of a model that goes on to interact with far more people than any single opinion piece might.
Yes, that is true. Though that can be sidestepped if you notice it and ask the model to ignore those biases (an extreme example would be opposition prep for a debate). I am not interested in politics and related issues anyway.
An example (over a year old): https://www.reddit.com/r/LateStageCapitalism/comments/17dmev...
Fine. But it would never occur to me to try to form political opinions using ChatGPT.
Political searches, I assume, make up a very minor percentage of real learning. Even in such cases, I would rather rely on a good LLM's response than scrounge the websites of mainstream media or blogs. Reading through opinionated articles to form my own opinion is an absolute waste of time when what I want is the truth, as accurately as possible. Plus, people don't generally change political opinions based on what they read; they read material that aligns with their side.