I would say that the real reason is that "it works". As simple as that.
The first thing you need when you make something new is to make it work; something that works badly is much better than something that does not work at all.
Take, for example, the Newcomen engine, with an abysmal efficiency of half a percent. It needed about 90 times more fuel than a modern engine, so it could only be used at the mines where the fuel was.
It worked badly, but it worked. Later came efficiency.
The same happened with locomotives: terrible efficiency at first, but they changed the world.
The first thing AI people had to do was make it work on all OSes. Yes, it works badly, but it works.
We downloaded a Clojure editor written in Java to test whether we were going to deploy it in our company. It gave us obscure Java errors on different OS configurations, such as Linux and Mac. We discarded it. It did not work.
We have engineers and we could fix those issues, but it is not worth it. The people who made this software do not understand basic things.
We have Claude working on hundreds of computers with different OSes. It just works.
They could have done better. They chose the path of least resistance, putting in the least effort and spending the fewest resources to accomplish the task.
There's nothing "good" about Electron. Hell, there are even easier ways of getting high-performance cross-platform software out there. Electron was used because it's the default, de facto choice; nobody even bothered to research or test whether it was the right choice, or even a good one.
"It just works". A rabid raccoon mashing its face on a keyboard could plausibly produce a shippable electron app. Vibe-bandit development. (This is not a selling point.) People claiming to be software developers should aim to do better.