> How would you ensure that the "average user" actually gets to the page he expects to get to?
I think you practically can't, and that's the problem.
TLS doesn't help you figure out which page is the real one, EV certificates never really caught on, and most financial incentives make such mechanisms unviable. The same goes for additional sources of information like Wikipedia: that just shifts the burden of combating misinformation onto the editors there, and not every project matters enough to have a page. You could use an OS with a package manager, but not all software is packaged that way, and packaging doesn't automatically make it immune to takeovers or bad actors.
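Even the basic "verify what you downloaded" step illustrates the problem: checking a checksum only tells you the file matches what the publisher posted, not that the publisher's page was the real one. A minimal sketch (file name and payload are placeholders, and the "published" hash is computed locally for the demo, which is exactly the circularity users face):

```shell
# Stand-in for a downloaded release artifact.
printf 'example payload' > app.tar.gz

# The hash a project would publish on its release page.
# Here we compute it ourselves, so this proves nothing about origin --
# trusting it means trusting the page it came from.
expected="$(sha256sum app.tar.gz | awk '{print $1}')"

# What the user actually runs after downloading:
actual="$(sha256sum app.tar.gz | awk '{print $1}')"
if [ "$actual" = "$expected" ]; then
    echo "checksum OK"
else
    echo "checksum MISMATCH" >&2
    exit 1
fi
```

Detached signatures (`gpg --verify`) push the question one level up to "whose key is this?", which is the same trust problem again.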
An unreasonable take would be:
> A set of government-run repositories and mirrors under a new TLD that is not allowed to be used for anything other than hosting software packages, similar to how .gov domains already work - be it through package manager repositories or websites. Only source code can be submitted by developers, who also need their ID verified and need to sign every release; it then gets reviewed by employees and is only published after automated checks pass as well. Anyone who tries funny business goes to jail. The unfortunate side effect is that you now live in a dystopia and go to jail anyway.
A more reasonable take would be that it's not something you can solve easily.
> If the average user doesn't know where the application he wants to download _actually_ comes from then maybe the average user shouldn't use the internet at all?
People die in car crashes. We can't eliminate those altogether, but we can at least take steps towards making things better instead of telling people that maybe they should just not drive. These are tough problems regardless.