I took that to mean ≈ "Amount of training data isn't the big factor dwarfing all else." Depends on who "we" refers to, I guess. Back when LLM-generated code was new, I definitely saw predictions that LLMs would struggle with niche or rarely used languages. These days, the consensus among colleagues within earshot is that LLMs handle Rust much better than Python or C++ (corpus size and AutoCodeBench scores notwithstanding).