Hacker News

nunodonato · yesterday at 8:58 AM

Only if you use LLMs wrong. Today's models have deep research modes that will generate a comprehensive analysis with proper citations.


Replies

yusina · yesterday at 4:06 PM

I think you either didn't read my response or missed the point. Whether or not the LLM output is useful, the learning outcome is hugely impacted. Negatively.

It's like copying someone else's homework. It looks like it gets the job done, but the point of a homework assignment is not the result you deliver; it's the process of creating that result that makes you learn something.

msgodel · yesterday at 9:05 AM

I feel like I should point out that's the dialog engine, not the model itself.
