I don't see why AI wouldn't be able to help you solve all your legacy code problems.
It still struggles with making changes to large code bases, but it doesn't have any problems explaining those code bases to you, helping you research or troubleshoot functionality 10x faster, especially if you're knowledgeable enough not to take its responses as gospel but willing to have the conversation. A simple layman prompt of "are you sure X does Y for Z reason? Then what about Q?" will quickly get to the bottom of any functionality. A 1-million-token context window is very capable if you manage that context window properly with high-level information and not just your raw code base.
And once you understand the problem and the required solution, AI won't have any problems producing high-quality working code for you, be it in Rust or COBOL.
Would not be able to help?
In my experience with Legacy Code projects the problem is very rarely "what is this code doing?" Some languages like VB6 (or even COBOL) are just full of very simple "what" answers. Obfuscation is rare and the language itself is easy to read. Reading the code with my own eyes gives me plenty of easy enough answers for the "what". LLMs can help with that, sure, but that's almost never the real skill in working with "legacy code".
The problem with working with legacy code, and where most of the hardest-won skills are, is investigating the "how" and the "why" over the "what". I haven't seen LLMs be very successful at that. I haven't seen very many people, myself included, be consistently successful at that either. A lot of the "how" and the "why" becomes a mystery buried in the catacombs of ancient commit messages and mind-reading séances with developers no longer around to question directly. "Why is this code doing what it is doing?" and "How did this code come to use this particular algorithm or data structure?" are frighteningly, deeply existential questions in almost any codebase, but especially as code falls into "legacy" modes of existence.
Some of that becomes actual physical archeology that LLMs can't even think to automate: the document you need is trapped in a binder in a closet in a hallway that the company sealed up and forgot about for 30 years.
Usually the answers, especially these days, were never written down on anything truly permanent. There was a Trello board that no one bothered to archive when the project switched to Jira. Some of the # references seem to be Bitbucket Issue and Pull Request numbers; was the project ever hosted on Bitbucket? No one archived that either. (This is an old CVS ID. I didn't even realize this project pre-dated git.) The original specs at the time of the MVP were a whiteboard and a pizza party. One of the former PMs preferred "hands on" micro-management and only ever communicated requirements changes in person to the lead dev in a one-hour "coffee" meeting every Wednesday and sometimes the third Thursday of the month. The team believed in a physical Kanban board at the time, and it was all Post-It Notes on the glass window in the conference room named "Cactus Joe". I heard from Paul, who was on a different project at the time, that Cathy's cube was right next to that window, and though she was only an Executive Assistant then, she moved a lot of those Post-It Notes around and might be able to tell you stories about what some of them said if you treat her to a nice lunch.
Software code is poetry written by people. The "what" is sometimes just the boring stuff, like whether every other line rhymes and the right syllables are stressed. The "how" and "why" are the stories that poetry was meant to tell, the reasons for it to exist, and the lessons it was meant to impart. Sometimes you can even still read some of that story in the names of variables and the allegories in its abstractions, when a person or two last shaped it, as you start to pick up their cultural references and build up empathy for their thought processes ("mind reading", frighteningly literally).
That's also why I fear LLMs will only accelerate that process: a hallway with closets getting bricked up takes time and creates certain kinds of civic paperwork. (You'll discover it eventually, if only because the company will renovate again, eventually.) Whereas a prompt file for a requirements change never getting saved anywhere is easy to do (and generally the default). That prompt file probably wasn't kicked up and down a change management process or debated by an entire team in a conference room for days; human memory of it will be just as nonexistent as the file no one saved. LLMs aren't even always given the "how" or "why", as they are from top to bottom "what machines"; that stuff likely isn't even in the lost prompts. If a team is smaller or using a "Dark Software Factory", is there even a reason to document the "how" or "why" of a spec or a requirement?
To generalize further: with no human writing the poetry, the allegories and cultural references disappear; the abstractions become just abstractions and not illuminating metaphors. LLMs are a blender of the poetry of many other people; there's no single mind to try to "read" meaning from. There's no clear thought process. There's no hope that a ranty monologue in a commit message unlocks the debate that explains why a thing was chosen despite the developer thinking it a bad idea. LLMs don't write ranty monologues about how the PM is an idiot, the users are fools, and the regulatory agency is going to miss the obvious loophole until the inevitable class-action suit. Most of those concepts are outside the scope of an LLM "thought process" altogether.
The "what is this code doing" is the "easy" part; it is everything else that is hard, and everything else that matters more. But I know I'm cynical, and you don't have to take my word for it that LLMs with "legacy code" mostly just speed up the already easy parts.