Maybe someone can explain this to me, but I never understood the appeal of GPIB for modern instruments (legacy instruments are of course "excused"). Electrically it's a terrible interface that introduces ground loops with the control computer. Speeds are laughable, and it requires expensive and exotic adapters with a complex software stack (I wish this project good success, it's needed!). Ethernet in comparison ticks all my boxes: it's electrically decoupled by default (just use UTP cables), crazy cheap, very fast, and comes with a sane software stack thanks to VXI-11. You can even bypass VISA if you wish and open a plain TCP socket, no need for any library. What am I missing?
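To illustrate the "plain TCP socket" point, here's a minimal sketch of a SCPI query without VISA. It assumes the instrument exposes a raw-socket SCPI service on the commonly used port 5025 and uses a placeholder IP address; check your instrument's manual for the actual port.

```python
# Query an instrument's identity over a raw TCP socket -- no VISA, no drivers.
# INSTRUMENT_IP and SCPI_PORT are placeholders; 5025 is a common raw-socket
# SCPI port, but not universal.
import socket

INSTRUMENT_IP = "192.0.2.10"
SCPI_PORT = 5025

with socket.create_connection((INSTRUMENT_IP, SCPI_PORT), timeout=2.0) as sock:
    sock.sendall(b"*IDN?\n")                      # standard SCPI identification query
    reply = sock.recv(4096).decode().strip()
    print(reply)                                  # e.g. vendor, model, serial, firmware
```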
If you buy used equipment that doesn't have Ethernet, or your company wants you to use the gear that has been in the lab for 10+ years, there's simply no other choice. Or companies that see Ethernet as a potential security attack vector. It's not that GPIB is better than Ethernet; in a few narrow aspects that's arguable, but as a general statement Ethernet wins.
Nothing, other than perhaps that there's a lot of high-quality legacy equipment out in the field that still works fantastically.
> What am I missing?
Not much, but consider latency: you can use the Group Execute Trigger (GET) to simultaneously trigger multiple instruments with both very low latency and very low latency dispersion. Think easy-to-use sub-microsecond synchronization.
Ethernet and USB 4 may have orders of magnitude more bandwidth, but can’t achieve the same multi-device synchronization capability without side-channel signals.
Now, sure, you can add the same capability with a programmable pulse generator connected via coax to the trigger input of all your instruments, but GPIB lets you do that with just the data connection (and you don’t always have a spare trigger channel). The only other protocols I know of with similar capabilities are PXI and PXIe, which are “PCI(express) in an incompatible form-factor, plus some extra signals for real-time synchronization”.
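For what it's worth, PyVISA exposes GET on the GPIB interface resource, so a rough sketch could look like the following. Assumptions: a GPIB board at GPIB0, two instruments at placeholder addresses 10 and 12, instruments that accept generic "TRIG:SOUR BUS"/"INIT" arming (the exact SCPI varies per instrument), and that your PyVISA version provides group_execute_trigger on the interface resource.

```python
# Sketch: fire a Group Execute Trigger (GET) at two instruments over the GPIB bus.
# Addresses and arming commands are placeholders -- adapt to your own setup.
import pyvisa

rm = pyvisa.ResourceManager()
board = rm.open_resource("GPIB0::INTFC")      # the GPIB interface board itself
scope = rm.open_resource("GPIB0::10::INSTR")  # placeholder instrument addresses
dmm = rm.open_resource("GPIB0::12::INSTR")

# Arm both instruments to wait for a bus trigger (instrument-specific SCPI).
for inst in (scope, dmm):
    inst.write("TRIG:SOUR BUS")
    inst.write("INIT")

# Address both devices as listeners and send GET in one bus transaction,
# so they trigger at essentially the same instant.
board.group_execute_trigger(scope, dmm)
```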