Is it bullshitting to perform nearly perfect language-to-language translation, to generate photorealistic depictions from text quite reliably, or to reliably perform named entity extraction or any of the other millions of real-world tasks LLMs already perform quite well?
Picking another task like translation, which doesn't really require any knowledge outside of language processing, is not a particularly good way to convince me that LLMs are doing anything other than language processing. Additionally, "nearly perfect" oversells it a bit, in my experience, given that they still struggle with idioms and cultural expressions.
Image generation is a bit better, except the model still isn't really aware of what the picture is, either. It's aware of how images are described by others, not of what the generated image actually depicts, let alone whether it's accurate. It makes pictures of dragons quite well, but if you ask it for a contour map of a region, will it represent the terrain accurately? It's not concerned with truth; it's concerned with truthiness, the appearance of truth. We know when that distinction matters. It doesn't.