I once wrote a GUI app that ran on a safety-critical platform. I ended up stuffing an offscreen rendering of the GUI into shmem at (I think) 24 Hz and blitting that screenshot into the safety-critical application. Clicks (there was no typing in this GUI) were passed back from the statically rendered image, which updated on that cadence, to the offscreen GUI.
Worked well. Not quite the same as this, but it's what this reminds me of.
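A minimal sketch of the frame-publishing side, assuming POSIX shared memory. The segment name, frame dimensions, render stub, and the seqlock-style counter (so the reader can detect torn frames) are illustrative assumptions, not details from the original setup:

```c
#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

#define FRAME_W 640                          /* hypothetical framebuffer size */
#define FRAME_H 480
#define FRAME_BYTES (FRAME_W * FRAME_H * 4)  /* RGBA8888 */

/* Shared layout: a sequence counter plus one frame of pixels.
   seq is odd while the writer is mid-update, even when the frame is stable,
   so the reader can retry if it catches a torn frame. */
struct shared_frame {
    volatile uint32_t seq;
    uint8_t pixels[FRAME_BYTES];
};

/* Stand-in for the real offscreen GUI renderer (assumed, not original). */
static void render_gui_offscreen(uint8_t *dst) {
    memset(dst, 0x20, FRAME_BYTES);          /* placeholder: flat gray frame */
}

int main(void) {
    /* Create/attach the shared segment; "/gui_frame" is an assumed name. */
    int fd = shm_open("/gui_frame", O_CREAT | O_RDWR, 0600);
    if (fd < 0) return 1;
    if (ftruncate(fd, sizeof(struct shared_frame)) < 0) return 1;

    struct shared_frame *shm = mmap(NULL, sizeof(*shm),
                                    PROT_READ | PROT_WRITE,
                                    MAP_SHARED, fd, 0);
    if (shm == MAP_FAILED) return 1;

    /* ~24 Hz publish loop: mark unstable, copy pixels, mark stable, sleep. */
    struct timespec tick = { .tv_sec = 0, .tv_nsec = 1000000000L / 24 };
    uint8_t frame[FRAME_BYTES];
    for (;;) {
        render_gui_offscreen(frame);
        shm->seq++;                          /* odd: frame being rewritten */
        __sync_synchronize();                /* order seq vs. pixel writes */
        memcpy((void *)shm->pixels, frame, FRAME_BYTES);
        __sync_synchronize();
        shm->seq++;                          /* even: frame stable again */
        nanosleep(&tick, NULL);
    }
}
```

The safety-critical side would mmap the same segment read-only, copy the pixels out whenever seq reads even before and after the copy, and draw the result. Clicks could travel the reverse direction through a similar small shared buffer or a pipe.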
I don't think I follow. What is that giving you that you wouldn't get by just having the user click in the application and see its real interface directly? Or are you saying you were embedding one application inside another?