Hacker News

tech_ken · yesterday at 8:35 PM · 2 replies

I don't think paradigm shifts have to be 'better' in some march-toward-progress sense; they can be lateral or even regressive in that respect and still lead to longer-horizon improvements.

I also think that what's practically applicable changes constantly. Perhaps we're truly at the End of Science, but empirically we've been wrong every other time we've said that. My money says there's more race to run.


Replies

ainch · today at 1:13 AM

On that note, Terence Tao gave a good interview to Dwarkesh Patel in which he talked about Kepler. He pointed out that the earlier geocentric models were actually more accurate than Kepler's at the time, in part because they'd had so much complexity piled on to patch minor errors. Kepler's theory was more elegant, but at the time it wasn't necessarily a better model.

I think important paradigm shifts often look like this: there's no particular reason to expect them to be instantly optimal. Deep learning vs 'good old-fashioned AI' is another example of this dichotomy; it took a long time for deep learning to establish itself.

cogman10 · yesterday at 9:11 PM

> I don't think paradigm shifts have to be 'better'

But they do. Paradigm shifts happen because the new paradigm explains the previously unexplained and, importantly, also covers everything the old model explained. If a new paradigm leaves prior data unexplained, it will never be adopted.

> Perhaps we're truly at the End of Science

Who said that? Just because the core of our current models seems pretty rock steady doesn't mean there's no more science. It simply means we should mostly expect refinement rather than radical discovery.

There will be sub-paradigm shifts, but there are likely not going to be major "relativity" moments from here on out.
