All models are wrong; some are useful. Keeping that in mind is especially critical for a model like exponential growth, which can quickly produce wildly poor predictions when extrapolated uncritically.
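As a toy illustration of that failure mode (this is synthetic data, not the METR series or anyone's actual methodology): fit an exponential trend, via ordinary least squares on the log of the series, to the early portion of a saturating curve, and the extrapolation error explodes even though the in-sample fit looks fine.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "capability" series: logistic growth, which looks
# exponential early on and then saturates.
t = np.arange(0, 40)
true = 100.0 / (1.0 + np.exp(-0.25 * (t - 20)))               # logistic curve
observed = true * np.exp(rng.normal(0.0, 0.05, size=t.size))  # noisy samples

# Fit y = a * exp(b*t) by least squares on log(y), using only the
# first 15 points (the regime where the data still look exponential).
n_fit = 15
b, log_a = np.polyfit(t[:n_fit], np.log(observed[:n_fit]), 1)
predicted = np.exp(log_a + b * t)

# Relative error of the exponential extrapolation at a few horizons:
# small at the edge of the fit window, enormous past the inflection.
for i in (n_fit, 25, 35):
    rel_err = abs(predicted[i] - true[i]) / true[i]
    print(f"t={t[i]:2d}  true={true[i]:6.1f}  "
          f"exp-fit={predicted[i]:8.1f}  rel. error={rel_err:.0%}")
```

Nothing here depends on the logistic shape specifically; any trend that eventually bends away from exponential will do the same thing to a naive log-linear extrapolation.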
I think "are the failures of a simple linear regression on the METR graph relevant" is a much better framing than "does seeing a line if you squint extrapolate forever." As I said, I'd much rather frame the discussion around the actual material conditions of AI progress, but if you are going to be drawing lines I'd at least want to start by acknowledging that no such model will be perfect.