Hacker News

mbesto | last Monday at 6:21 PM | 2 replies

I think the discrepancy is this:

1. We trained it on a fraction of the world's information (e.g. text and media that is explicitly online)

2. It carries all of the biases we humans have and, worse, the biases present in the information we chose to explicitly share online (which may or may not differ from the experiences humans have in everyday life)


Replies

nix0n | last Monday at 7:11 PM

> It carries all of the biases we humans have and, worse, the biases present in the information we chose to explicitly share online

This is going to be a huge problem. Most people assume computers are unbiased and rational, and increasing use of AI means more, and more consequential, decisions will be made by it.

aryehof | yesterday at 8:37 AM

I see this a lot in what LLMs know and promote in terms of software architecture.

All seem biased toward recent buzzwords and approaches. Discussions include the same hand-waving about DDD, event sourcing, and hexagonal architecture, i.e. the current fashion. Apparently nothing of worth preceded them.

I fear that we are condemned to a future with no genuinely novel progress, just a regurgitation of current fashions and biases.