This is a bad analogy.
> more accurate in many cases
It's laughable that LLMs can be considered more accurate than human operators at the macro level. Sure, if I ask a search bot the date Notre Dame was built, it'll get it right more often than I would, but if I ask it to write even a simple heap memory allocator, it's going to vomit all over itself.
> Nobody [...] will ever care if the software was written by people or a bot, as long as it works
Yeah... wake me up when LLMs can produce even nominally complex pieces of software that are on par with human quality. For anything outside of basic web apps, we're a long way off.
> if I ask a search bot the date Notre Dame was built, it'll get it right more often than me
With both of you doing research in your own ways, you'll each get it right more often than not (I hope).