As is common in hard real-time code, there is no dynamic allocation during operation:
allocation/deallocation from/to the free store (heap)
shall not occur after initialization.
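A minimal sketch of what that rule looks like in practice (the names and sizes here are hypothetical, not taken from any particular standard): everything is allocated once during an init phase, and the steady-state loop never touches the heap afterwards.

```cpp
#include <vector>

// Hypothetical flight-control state; capacity is fixed once at startup.
struct Tracks {
    std::vector<float> positions;
};

static Tracks g_tracks;

void init() {
    // All free-store allocation happens here, before real-time operation.
    g_tracks.positions.reserve(1024);  // assumed worst-case track count
}

void control_loop_step() {
    // After init: stay within the reserved capacity; no new/delete,
    // no reserve(), nothing that could reach the heap allocator.
    if (g_tracks.positions.size() < g_tracks.positions.capacity())
        g_tracks.positions.push_back(0.0f);  // cannot trigger a reallocation
}
```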
This works fine when the problem is roughly constant, as it was in, say, 2005. But what do things look like in modern AI-guided drones?

There are missiles in which the expected allocation rate per second is measured, and the hardware is simply given enough memory for the entire duration of the missile's flight plus a bit more. Garbage collection is then done by exploding the missile on the target ;)
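The sizing arithmetic behind that anecdote is trivial; a sketch with invented numbers (the rate, flight time, and margin below are purely illustrative, not from any real system):

```cpp
#include <cstddef>

// Illustrative numbers only: rate and flight time are made up.
constexpr std::size_t kBytesAllocatedPerSecond = 4 * 1024;   // measured rate
constexpr std::size_t kMaxFlightSeconds        = 300;        // worst case
constexpr std::size_t kSafetyMargin            = 512 * 1024; // "a bit more"

// Fit this much RAM and the "leak" never matters before impact.
constexpr std::size_t kRequiredRam =
    kBytesAllocatedPerSecond * kMaxFlightSeconds + kSafetyMargin;
static_assert(kRequiredRam < 2 * 1024 * 1024, "fits in a 2 MiB part");
```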
What you are actually doing here is moving allocation logic out of the heap allocator and into your program. This lets you use pools or buffers whose sizes you know exactly. But unless your program uses exactly the same amount of memory at all times, you now have to manage allocations within those pools/buffers yourself, as in the sketch below.
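A minimal sketch of such a pool (a hypothetical free-list pool, not any particular library): all storage exists up front, and acquire/release is now the program's job. Exhaustion surfaces as acquire() returning nullptr, which is exactly the management burden described above.

```cpp
#include <array>
#include <cstddef>
#include <new>

// Fixed-size object pool: storage is a compile-time-sized array, and a
// free list is threaded through the unused slots. No heap after startup.
template <typename T, std::size_t N>
class Pool {
    union Slot {
        Slot* next;
        alignas(T) unsigned char storage[sizeof(T)];
    };
    std::array<Slot, N> slots_{};
    Slot* free_ = nullptr;

public:
    Pool() {
        for (std::size_t i = 0; i < N; ++i) {  // build the free list
            slots_[i].next = free_;
            free_ = &slots_[i];
        }
    }
    T* acquire() {                   // nullptr when exhausted: the caller,
        if (!free_) return nullptr;  // i.e. your program logic, must cope
        Slot* s = free_;
        free_ = s->next;
        return new (s->storage) T{}; // placement-new into the slot, no heap
    }
    void release(T* p) {
        p->~T();
        Slot* s = reinterpret_cast<Slot*>(p);
        s->next = free_;             // slot goes back on the free list
        free_ = s;
    }
};
```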
"AI" comes in various flavors. It could be a expert system, a decision forest, a CNN, a Transformer, etc. In most inference scenarios the model is fixed, the input/output shapes are pre-defined and actions are prescribed. So it's not that dynamic after all.
How do you think these modern AI-guided drones use their AI? What part of the drone uses it?
Why would the modern environment materially change this? The resource budget fixed at initialization reflects the limitations of the hardware. That budget is what it is.
I can't think of anything about "modern AI-guided drones" that would change the fundamental mechanics. Some systems support very elastic and dynamic workloads under fixed allocation constraints.