I'm on a very similar train of thought. You cannot dump all the data into an LLM (for many reasons), and we also already have clearly defined rules that an LLM doesn't have to figure out.
So keep organizing data (LLM-powered, of course) so that you can query it as usual (multimodal: not just graphs, but also time series, relational data, etc.). Feed that to deterministic computations. Let an LLM reason about the outcomes.
Give the LLM the freedom to orchestrate the retrieval and computations. Make sure the way it orchestrates them is auditable.
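A minimal sketch of what "auditable orchestration" could look like: the LLM emits a plan over a registry of deterministic tools, and every executed step is recorded with its inputs and outputs so a reviewer can replay the run. The tool names, plan format, and stubbed retrieval here are all made up for illustration.

```python
import json
from dataclasses import dataclass, field

# Hypothetical tool registry: deterministic computations the LLM may invoke.
# Names (query_timeseries, threshold_check) are illustrative only.
TOOLS = {
    "query_timeseries": lambda series: [1.0, 2.0, 5.0],  # stubbed retrieval
    "threshold_check": lambda values, limit: [v for v in values if v > limit],
}

@dataclass
class AuditLog:
    steps: list = field(default_factory=list)

    def record(self, tool, args, result):
        # Every orchestration step is logged with inputs and outputs,
        # so the LLM's decisions are auditable after the fact.
        self.steps.append({"tool": tool, "args": args, "result": result})

def orchestrate(plan, log):
    """Execute a plan (hard-coded here; in practice emitted by the LLM)."""
    results = {}
    for step in plan:
        tool = TOOLS[step["tool"]]
        # Resolve references to earlier step results by id.
        args = {k: results.get(v, v) if isinstance(v, str) else v
                for k, v in step["args"].items()}
        out = tool(**args)
        log.record(step["tool"], args, out)
        results[step["id"]] = out
    return results

log = AuditLog()
plan = [
    {"id": "ts", "tool": "query_timeseries", "args": {"series": "sensor_a"}},
    {"id": "flags", "tool": "threshold_check", "args": {"values": "ts", "limit": 3.0}},
]
out = orchestrate(plan, log)
print(json.dumps(log.steps, indent=2))
```

The point is only the shape: the LLM never computes anything itself, it just chooses which deterministic steps to run, and the log is the audit trail.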
The key thing I want to achieve goes beyond this system: I want to uncover hidden things in it (missing from the ontology, the computations, etc.) and propose adding them. This effectively gives you a generic approach to building ever-evolving systems that align with reality while remaining fully auditable.
The last part we're very excited by too: using orchestration logs and failure traces to surface gaps in the ontology and propose extensions. Early days, but that's where the architecture compounds: the system gets more complete every time it's used.
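A toy sketch of that log-mining step: scan failure traces for concepts the ontology or tooling couldn't resolve, and surface the recurring ones as extension candidates. The trace format, error labels, and frequency threshold are all assumptions, not anything from this thread.

```python
from collections import Counter

# Hypothetical failure traces from orchestration runs: each records the
# concept the LLM tried to use that the current ontology/tooling lacked.
traces = [
    {"run": 1, "error": "unknown_entity", "concept": "maintenance_window"},
    {"run": 2, "error": "unknown_entity", "concept": "maintenance_window"},
    {"run": 3, "error": "missing_computation", "concept": "rolling_median"},
    {"run": 4, "error": "unknown_entity", "concept": "maintenance_window"},
]

def propose_extensions(traces, min_count=2):
    """Concepts that fail repeatedly are candidates to add to the ontology."""
    counts = Counter(t["concept"] for t in traces)
    return [c for c, n in counts.most_common() if n >= min_count]

print(propose_extensions(traces))  # → ['maintenance_window']
```

In a real system the proposals would go to a human (or a review step) rather than being merged automatically, which keeps the evolution loop itself auditable.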