I feel like that is a pretty high bar to call "reasoning". I would like to think I am capable of reasoning, and yet I would not be able to write out by hand a binary file to be loaded onto the controller without using a compiler or consulting an assembly reference manual.
It seems like you want LLMs to be able to use tools (which some of them can; for instance, the search engine chat bots can run searches) and make independent decisions (the search term here is "agent"; I don't know how well they work, but I personally wouldn't let my computer do things like that unsupervised). However, I wouldn't consider these things prerequisites for reasoning.
I would consider being able to solve a large range of problems that a human could solve with just pencil and paper to be reasoning. LLMs don't really seem to be as good as humans at this, but they certainly CAN solve these types of problems.