This is a wildly unrealistic viewpoint. It assumes you know the language the client is written in, have total knowledge of the entire codebase, and can readily spot any security issue or backdoor in software you didn't write yourself (and even if you did write it, that's no guarantee).
It also completely disregards the history of supply-chain incidents: the XZ Utils backdoor, the infected NPM packages of the month, and CVEs that have sat undiscovered in Linux (a project with thousands of contributors) for over a decade.
You're conflating two orthogonal threat models here.
Threat model A: I want to be secure against a government agency in my country using the ordinary judicial process to order engineers employed in my country to make technical modifications to products I use in order to spy on me specifically. This is predicated on the (untrue in my personal case) idea that my life would be endangered if the government obtained my data.
Threat model B: I want to be secure against all nation state actors in the world who might ever try to surreptitiously backdoor any open source project that has ever existed.
I'm talking about threat model A. You're describing threat model B, and I don't disagree with you that fighting that is more or less futile.
Many open source projects are controlled by people who do not live in the US and are not US citizens. Someone in the US is completely immune to threat model A when they use those open source projects and build them directly from the source.
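For what "build them directly from the source" can mean in practice, here is a minimal sketch of checking an upstream release before building it. The file names, checksum, and key handling are hypothetical placeholders, and it assumes the maintainer's public key is already in your gpg keyring, obtained through a channel you trust.

    #!/usr/bin/env python3
    # Hypothetical sketch: verify a release artifact before building it from source.
    # File names, checksum, and maintainer key are placeholders, not real values.
    import hashlib
    import subprocess
    import sys

    TARBALL = "example-project-1.2.3.tar.gz"             # placeholder release tarball
    SIGNATURE = TARBALL + ".asc"                          # detached signature from the maintainer
    EXPECTED_SHA256 = "replace-with-published-checksum"   # checksum published out of band

    def sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def main():
        # 1. Compare against the checksum the maintainer published out of band.
        if sha256(TARBALL) != EXPECTED_SHA256:
            sys.exit("checksum mismatch: do not build this tarball")
        # 2. Verify the maintainer's detached signature on the tarball.
        if subprocess.run(["gpg", "--verify", SIGNATURE, TARBALL]).returncode != 0:
            sys.exit("signature verification failed: do not build this tarball")
        print("artifact verified; proceed with your own build from source")

    if __name__ == "__main__":
        main()

The point is where the trust chain ends: it runs to the upstream maintainer outside US jurisdiction, not to any US-based distributor of prebuilt binaries.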