Not necessarily. I found a benchmark suite that you can run yourself and that does pretty much just raw compute (JS vs C/C++ in Wasm):
https://takahirox.github.io/WebAssembly-benchmark/
JS is not always faster, but in a good chunk of cases it is.
Things might be getting better for JS, but just looking over those briefly, they don't look memory-constrained, which is the main place where I've seen Wasm deliver significant speedups. Also, simpler code makes JIT optimizations look better, but that level of performance won't be consistent in real-world code.
I would take these benchmarks with a pinch of salt. Within a single function, JS is fairly easy to optimize because the engine can see every way each variable gets assigned. Once you call a function, the argument types can be anything the caller passes in, which makes optimization far more complex.
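To make that concrete, here is a tiny sketch (the add helper is hypothetical, not taken from the benchmark above):

    // Hypothetical helper, just to illustrate call-site argument types.
    function add(a, b) {
      return a + b;
    }

    // Monomorphic usage: add only ever sees two numbers here, so the engine
    // can specialize it (and the `+`) to a plain numeric add.
    let total = 0;
    for (let i = 0; i < 1_000_000; i++) {
      total = add(total, i);
    }

    // The same function called with strings elsewhere: the engine now has to
    // keep a generic path for `+`, and the numeric loop above may get slower too.
    const label = add("wasm-", "benchmark");

    console.log(total, label);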
In practice, WASM codebases won't just be running a single pure function in WASM from JS; they'll be passing data structures around from one WASM function to another, and that's going to be faster than doing the same in JS.
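As a toy illustration of that boundary cost (not of the benchmark above): the sketch below hand-assembles a minimal Wasm module exporting add(i32, i32), then calls it from a tight JS loop, so every iteration pays the JS-to-Wasm call overhead. Real Wasm apps keep the hot loop on the Wasm side instead; the module is hand-written here only so the snippet runs without a toolchain.

    // Minimal module: (module (func (export "add") (param i32 i32) (result i32) local.get 0 local.get 1 i32.add))
    const bytes = new Uint8Array([
      0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,                   // magic + version
      0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,             // type section: (i32, i32) -> i32
      0x03, 0x02, 0x01, 0x00,                                           // function section
      0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00,             // export "add"
      0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // code: local.get 0; local.get 1; i32.add
    ]);

    WebAssembly.instantiate(bytes).then(({ instance }) => {
      // One JS-to-Wasm boundary crossing per iteration: the call overhead can
      // easily dominate the single i32.add the module performs.
      let sum = 0;
      for (let i = 0; i < 1_000_000; i++) {
        sum = instance.exports.add(sum, 1);
      }
      console.log(sum); // 1000000
    });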
By the way, if I remember correctly, V8 can heuristically optimize function calls when every call passes the same argument types, but because this is an implementation detail it's difficult to know which scenarios actually get optimized and which don't.
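Since you can't easily tell from the outside, about the only option is to measure. A rough sketch of that, probing a closely related case (consistent vs mixed object shapes at a property access, which is the kind of thing engines specialize on); the names and iteration counts are made up, and the gap varies a lot between engines and versions:

    // Two arrays with the same contents, but one uses a single object shape
    // and the other mixes two shapes.
    const sameShape = [];
    const mixedShape = [];
    for (let i = 0; i < 100_000; i++) {
      sameShape.push({ x: i, y: 0 });
      mixedShape.push(i % 2 ? { x: i, y: 0 } : { x: i, y: 0, z: 0 });
    }

    // The p.x access stays monomorphic while only sameShape has been seen,
    // and becomes polymorphic once mixedShape is processed.
    function sumX(arr) {
      let s = 0;
      for (const p of arr) s += p.x;
      return s;
    }

    console.time("same shape");
    for (let i = 0; i < 100; i++) sumX(sameShape);
    console.timeEnd("same shape");

    console.time("mixed shapes");
    for (let i = 0; i < 100; i++) sumX(mixedShape);
    console.timeEnd("mixed shapes");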
It is easy to make benchmarks where JS is faster. JS inlines at runtime, while wasm typically does not, so if you have code where the wasm toolchain makes a poor inlining decision at compile time, then JS can easily win.
But that is really only common in small computational kernels. If you take a large, complex application like Adobe Photoshop or a Unity game, wasm will be far closer to native speed, because its compilation and optimization approach is much closer to that of native builds (types known ahead of time, no heavy dependence on tiering and recompilation, etc.).