Most languages don't offer the ability to arbitrarily grow the stack, so it should be straightforward to compute an upper bound on any given function's stack usage. C is a bit harder: you need to forbid alloca (and C99 variable-length arrays, which have the same effect), as well as goto and setjmp/longjmp (because I think you need control flow to be reducible for this analysis to work).
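For a concrete example (assuming GCC here: its -fstack-usage flag emits a .su file listing each function's frame size, qualified as "static", "dynamic", or "bounded"), a function whose locals are all fixed-size gets a frame the compiler can state exactly:

    /* stack_demo.c — all locals are fixed-size, so the frame size is a
       compile-time constant.  Build with: gcc -c -fstack-usage stack_demo.c
       and inspect the generated stack_demo.su. */
    #include <stddef.h>

    int sum_fixed(const int *data, size_t n) {
        int buf[256];                  /* 1024 bytes, known at compile time */
        size_t len = n < 256 ? n : 256;
        int total = 0;
        for (size_t i = 0; i < len; i++) {
            buf[i] = data[i] * 2;
            total += buf[i];
        }
        return total;
    }

Swap buf for a VLA (int buf[n]) and the .su entry flips to "dynamic", which is exactly the case you'd have to forbid.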
But the problem then is recursion, which exists in every language: even if you know every function's frame size, recursive calls can still consume an arbitrary amount of stack, so you need to forbid recursion as well.
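A minimal illustration in C — the recursive version's peak stack usage scales with its input, which no per-function frame bound can capture:

    struct node { int value; struct node *next; };

    /* depth == list length: total stack is frame_size * n, unbounded */
    int sum_recursive(const struct node *n) {
        if (n == NULL) return 0;
        return n->value + sum_recursive(n->next);
    }

    /* one frame, regardless of list length */
    int sum_iterative(const struct node *n) {
        int total = 0;
        for (; n != NULL; n = n->next)
            total += n->value;
        return total;
    }

(An optimizer may happen to turn the first into a loop, but you can't build a guarantee on "may happen to".)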
And that only gives you guarantees with respect to the stack, so you'll probably also want to forbid general heap allocations (possibly replacing them with fixed-size static buffers).
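A minimal sketch of what that replacement can look like — a bump allocator over a static pool, so the total is fixed at link time (pool_alloc and POOL_SIZE are made-up names, not any real API):

    #include <stddef.h>

    #define POOL_SIZE 4096
    #define POOL_ALIGN 16              /* assumed worst-case alignment */

    static _Alignas(POOL_ALIGN) unsigned char pool[POOL_SIZE];  /* C11 */
    static size_t pool_used = 0;

    /* Returns NULL when the pool is exhausted; no free() — the usual
       pattern is to reset the whole pool between tasks instead. */
    void *pool_alloc(size_t nbytes) {
        if (nbytes > POOL_SIZE)
            return NULL;               /* also guards the rounding below */
        size_t rounded = (nbytes + POOL_ALIGN - 1) & ~(size_t)(POOL_ALIGN - 1);
        if (rounded > POOL_SIZE - pool_used)
            return NULL;
        void *p = &pool[pool_used];
        pool_used += rounded;
        return p;
    }

    void pool_reset(void) { pool_used = 0; }

Allocation can now fail, but it can never grow past POOL_SIZE, which is the property you actually wanted.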
Recursion can cause problems (though it's easy to detect if you can build a callgraph), but the harder part in most cases is constructing that callgraph in the face of function pointers and other runtime abstractions. It's possible to do a worst-case analysis if you have types that are constrained enough to reduce the possible targets of such a call to a reasonably small set, but most static stack analysis tools bail on trying to analyse this at all.
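The tractable case looks something like this: an indirect call whose targets come from a closed, statically visible table, so "worst case over all possible targets" is a max over two frames rather than over every function in the program:

    typedef int (*binop_fn)(int, int);

    static int add_op(int a, int b) { return a + b; }
    static int mul_op(int a, int b) { return a * b; }

    /* closed target set: the pointer can only ever be one of these */
    static const binop_fn op_table[] = { add_op, mul_op };

    int apply(unsigned which, int a, int b) {
        return op_table[which % 2](a, b);   /* indirect, but only 2 targets */
    }

Once the pointer can be stored in arbitrary data structures or passed in from outside, the target set is no longer enumerable and the tools give up.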
Well, I'm not sure about forbidding heap allocations; that would severely limit what you can do with a function. In low-level languages like Rust or C it would be difficult to track the total size of heap allocations in a performant way, but in e.g. Python it should be possible to add some tracing so that a function can only allocate X bytes, and beyond that throw an error or log a warning.
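In C the obvious (if intrusive) approach is a counting wrapper around malloc — a sketch below, with made-up names; on the Python side, tracemalloc in the standard library already does the bookkeeping half of this:

    #include <stdio.h>
    #include <stdlib.h>

    static size_t budget_limit = 1u << 20;  /* 1 MiB cap, per policy */
    static size_t budget_used  = 0;

    void *capped_malloc(size_t nbytes) {
        if (nbytes > budget_limit - budget_used) {
            fprintf(stderr, "allocation of %zu bytes would exceed budget\n",
                    nbytes);
            return NULL;                    /* or abort(), or just warn */
        }
        void *p = malloc(nbytes);
        if (p != NULL)
            budget_used += nbytes;
        return p;
    }

    /* The caller has to pass the size back, because malloc doesn't
       expose it — one reason exact accounting is awkward in C. */
    void capped_free(void *p, size_t nbytes) {
        free(p);
        budget_used -= nbytes;
    }

(Real code would also need to cover calloc/realloc and everything your libraries allocate behind your back, which is where "difficult in a performant way" comes in.)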
It would be great if we could mark some functions as non-Turing-complete and avoid recursion. Would make it easier to reason about them.
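The no-recursion half is enforceable today: it's just cycle detection on the callgraph (MISRA C bans direct and indirect recursion on exactly this basis). A toy sketch, assuming the callgraph has already been extracted — which, per the comment above about function pointers, is the hard part:

    #define NFUNCS 4

    /* calls[i][j] == 1 means function i calls function j */
    static const int calls[NFUNCS][NFUNCS] = {
        {0, 1, 1, 0},   /* main  -> parse, eval */
        {0, 0, 0, 1},   /* parse -> emit        */
        {0, 0, 1, 0},   /* eval  -> eval (!)    */
        {0, 0, 0, 0},   /* emit                 */
    };

    static int dfs(int f, int *on_path, int *done) {
        if (on_path[f]) return 1;       /* back edge: call cycle */
        if (done[f]) return 0;
        on_path[f] = 1;
        for (int g = 0; g < NFUNCS; g++)
            if (calls[f][g] && dfs(g, on_path, done))
                return 1;
        on_path[f] = 0;
        done[f] = 1;
        return 0;
    }

    int has_recursion(void) {
        int on_path[NFUNCS] = {0}, done[NFUNCS] = {0};
        for (int f = 0; f < NFUNCS; f++)
            if (dfs(f, on_path, done))
                return 1;
        return 0;
    }

(Ironically the detector itself recurses; an explicit worklist would fix that.)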