It's tricky to untangle all of this, because there are a lot of misconceptions here.
First, Amdahl's law just says that the non-parallel parts of a program become more of a bottleneck as the parallel parts are split across more cores. It's trivial and obvious, and it says nothing about whether a given program can scale to more cores, because it says nothing about how much of that program can actually be parallelized.
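To make that concrete, here's a throwaway C snippet (the 10% serial fraction and the core counts are just made-up example numbers) that plugs values into the usual formula, speedup = 1 / (s + (1 - s)/n):

```c
#include <stdio.h>

/* Amdahl's law: with serial fraction s and n cores,
 * speedup = 1 / (s + (1 - s) / n).
 * As n grows, the (1 - s)/n term shrinks toward zero and the
 * serial part s dominates, so the speedup caps out at 1/s. */
static double amdahl_speedup(double serial_fraction, int cores) {
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores);
}

int main(void) {
    const double s = 0.10;                      /* example: 10% of the work is serial */
    const int core_counts[] = {1, 2, 8, 64, 1024};
    for (int i = 0; i < 5; i++) {
        int n = core_counts[i];
        printf("%4d cores -> %.2fx speedup\n", n, amdahl_speedup(s, n));
    }
    /* Even at 1024 cores the speedup is stuck just under 1/0.10 = 10x. */
    return 0;
}
```

Notice the formula says nothing about *what* s is for your program; it only tells you what happens once you already know it.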
Second, regarding your other comment: there is nothing special about "rust having the semantics" for NUMA. People have been programming NUMA machines for as long as they have existed (obviously). NUMA just means that some memory addresses are local to the CPU you're running on and some are not. If you want things to be fast, you make sure the addresses you touch are local as much as possible.
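For example, on Linux you can do that explicitly in plain C with libnuma. A minimal sketch (the node number and buffer size are arbitrary; assumes the libnuma headers are installed and you link with -lnuma):

```c
#include <numa.h>    /* libnuma: compile with -lnuma */
#include <stdio.h>
#include <string.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA not supported on this machine\n");
        return 1;
    }

    /* Run this thread on node 0 (arbitrary choice for the example),
     * then allocate from whatever node the thread is running on, so
     * the addresses we touch are local to the CPU doing the touching. */
    numa_run_on_node(0);
    size_t size = 64 * 1024 * 1024;             /* 64 MiB, arbitrary */
    char *buf = numa_alloc_local(size);
    if (!buf) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }

    memset(buf, 0, size);                       /* touch the pages -> local accesses */
    numa_free(buf, size);
    return 0;
}
```

Remote accesses still work either way; they're just slower, which is the whole point. None of this requires anything from the language beyond letting you control where memory lives and where threads run.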