Can you replicate an algorithm just by looking at its inputs and outputs? Yes, sometimes.
Will it be an exact copy of the original algorithm, the very same implementation? Often not.
Will it be close enough to be useful? Maybe.
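A minimal sketch of the idea (all names here are illustrative, not from any real system): query a black box, record its input/output pairs, and build a replica that replays the recorded behavior. The replica matches the original on observed inputs, is only approximate between them, and its internals look nothing like the original's.

```python
def black_box(x):
    # The hidden algorithm we can query but not inspect.
    return x * x - 3 * x + 2

# Record the input/output behavior on a sampled range.
observations = {x: black_box(x) for x in range(-10, 11)}

def replica(x):
    # Replay recorded behavior: the nearest observed input wins.
    nearest = min(observations, key=lambda k: abs(k - x))
    return observations[nearest]

print(replica(4) == black_box(4))    # exact on an observed input
print(replica(4.4), black_box(4.4))  # close, but not equal, between them
```

The replica is "close enough to be useful" near the observed data, yet it implements a lookup table rather than the original arithmetic, which is the sense in which a copy can behave alike while being a different algorithm inside.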
LLMs take human language as inputs and outputs, and they learn (mostly) from human language data. But their internals are not language. It's those internal algorithms, trained on the relations present in language data, that give LLMs their power.