Isn't there a lot of daylight between old Fortran code and AI?
What if we rewrote the old algorithms in C with modern techniques? Multithreading? Or GPU compute? If there's value there, I could do these things. It probably wouldn't take that long.
The issue isn't that the Fortran code is too slow. The issue is that the problem is extremely complicated, hard to measure, and very hard to control in ways that no one really understands. However, you can just plug the measurement outputs and system inputs into some controller. Machine learning helps with control by jointly modeling imperfections in the physical model, imperfections in the hardware that controls things, and imperfections in the measurements. That's something you really don't want to even attempt writing by hand (in whatever programming language).
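To make that concrete, here is a minimal toy sketch of the idea (a hypothetical system and made-up names, not the actual facility code): a small network is trained on logged (command, measurement) pairs so it absorbs the combined model, actuator, and sensor imperfections, and is then used directly as a controller. Nobody writes down the actuator nonlinearity or the sensor bias; the network soaks them up from data.

```python
# Hedged toy sketch: learn a controller that maps a desired measurement to a
# command directly from logged data, so model, actuator and sensor
# imperfections are captured jointly instead of being coded by hand.
import jax, jax.numpy as jnp

def real_system(u, key):
    """Stand-in for physics + hardware + diagnostics, all imperfect."""
    actuator = 0.9 * u + 0.05 * u**3                   # unmodelled actuator nonlinearity
    state = jnp.sin(actuator)                          # "physics" the Fortran code approximates
    return state + 0.02 + 0.01 * jax.random.normal(key, u.shape)  # biased, noisy sensor

# Logged (command, measurement) pairs from past operation.
k_data, k_noise, k_init = jax.random.split(jax.random.PRNGKey(0), 3)
u_log = jax.random.uniform(k_data, (512,), minval=-1.5, maxval=1.5)
y_log = real_system(u_log, k_noise)

# Tiny MLP controller: desired measurement in, command out.
def init_params(key, hidden=32):
    k1, k2 = jax.random.split(key)
    return {"W1": 0.1 * jax.random.normal(k1, (1, hidden)), "b1": jnp.zeros(hidden),
            "W2": 0.1 * jax.random.normal(k2, (hidden, 1)), "b2": jnp.zeros(1)}

def controller(params, y):
    h = jnp.tanh(y[:, None] @ params["W1"] + params["b1"])
    return (h @ params["W2"] + params["b2"])[:, 0]

def loss(params, y, u):
    return jnp.mean((controller(params, y) - u) ** 2)

params = init_params(k_init)
grad_fn = jax.jit(jax.grad(loss))
for _ in range(3000):                                  # plain gradient descent, no extra deps
    params = jax.tree_util.tree_map(lambda p, g: p - 0.1 * g,
                                    params, grad_fn(params, y_log, u_log))

# Ask the learned controller which command should produce a measurement of 0.5.
print(controller(params, jnp.array([0.5])))
```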
This is what they would like to do, but it is hard to get funding for such an effort or to prioritize the work. It sounds like they think it would take a long time, and the exercise alone wouldn't yield high-impact papers.
Plus, even after doing that, there would still be a sim2real gap. The goal of our research is to use physics-informed deep learning and methods with strong inductive biases, combined with transfer learning and low-shot learning, to overcome the sim2real gap.
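In the same toy setting as above, here is a hedged sketch of one way "pretrain on the cheap idealized simulation, then fine-tune on a handful of real shots with a physics-consistency term" can look. Everything here is illustrative (the simulator, loss weights, and data sizes are assumptions), not the actual research code.

```python
# Hedged sketch of a simple sim2real recipe: pretrain on plentiful simulated
# data, then fine-tune on very few real measurements while a physics term
# keeps the network close to the simulator where no real data exists.
import jax, jax.numpy as jnp

simulator = jnp.sin                                   # idealized physics (the "Fortran" model)

def real_system(u, key):                              # reality: extra nonlinearity, bias, noise
    return jnp.sin(0.9 * u + 0.05 * u**3) + 0.02 + 0.01 * jax.random.normal(key, u.shape)

def init_params(key, hidden=32):
    k1, k2 = jax.random.split(key)
    return {"W1": 0.1 * jax.random.normal(k1, (1, hidden)), "b1": jnp.zeros(hidden),
            "W2": 0.1 * jax.random.normal(k2, (hidden, 1)), "b2": jnp.zeros(1)}

def predict(params, u):
    h = jnp.tanh(u[:, None] @ params["W1"] + params["b1"])
    return (h @ params["W2"] + params["b2"])[:, 0]

def fit(params, loss, args, steps, lr):
    grad_fn = jax.jit(jax.grad(loss))
    for _ in range(steps):
        params = jax.tree_util.tree_map(lambda p, g: p - lr * g,
                                        params, grad_fn(params, *args))
    return params

# 1) Pretrain on plentiful simulated data (the transfer-learning source task).
u_sim = jnp.linspace(-1.5, 1.5, 256)
sim_loss = lambda p, u: jnp.mean((predict(p, u) - simulator(u)) ** 2)
params = fit(init_params(jax.random.PRNGKey(0)), sim_loss, (u_sim,), 2000, 0.1)

# 2) Fine-tune on very few real measurements (low-shot), with a physics term
#    penalizing drift away from the simulator at unmeasured inputs.
k_u, k_y = jax.random.split(jax.random.PRNGKey(1))
u_real = jax.random.uniform(k_u, (8,), minval=-1.5, maxval=1.5)
y_real = real_system(u_real, k_y)

def sim2real_loss(p, u_r, y_r, u_c, lam=0.1):
    data = jnp.mean((predict(p, u_r) - y_r) ** 2)           # fit the 8 real shots
    physics = jnp.mean((predict(p, u_c) - simulator(u_c)) ** 2)  # stay near known physics
    return data + lam * physics

params = fit(params, sim2real_loss, (u_real, y_real, u_sim), 1000, 0.05)
print("real-data MSE:", jnp.mean((predict(params, u_real) - y_real) ** 2))
```

The physics term here is the crudest possible inductive bias (stay close to the simulator); in practice one would encode known structure such as conservation laws or governing-equation residuals instead.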
Rewriting the old Fortran code in C will probably make it slower and introduce new bugs. A smarter thing to do when picking up terrible code written by physicists is to document everything you can, write tests, and then start refactoring bit by bit using modern Fortran features (yes, the latest standard is Fortran 2023).
Fortran compilers have had more than 40 years to become pretty good at generating efficient code; they can make assumptions that are not possible in C (for example, no aliasing) to do so. Besides, most compilers can already do vectorization and auto-parallelization with multithreading, coarrays, and/or OpenMP, which can also be offloaded to a GPU.