Within 10 minutes earlier today, I took 1.5 years of raw financial trading data and generated performance stats, graphed return distributions, ran correlation analysis, and, to top it off, built Monte Carlo shock tests that used the base data as the model input and ran hundreds of simulations with corresponding charts.
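For a sense of what that looks like in code, here is a minimal Python sketch of that kind of Monte Carlo shock test: bootstrap-resample the historical returns, exaggerate the tail draws, and compound each simulated path. The function name, the shock rule, and the file/column names are my own illustration, not the exact analysis the model produced.

```python
# Toy sketch of a Monte Carlo shock test over historical returns.
# The shock rule and all names here are illustrative assumptions.
import numpy as np
import pandas as pd

def monte_carlo_shock(returns: pd.Series, n_sims: int = 500,
                      horizon: int = 252, shock_scale: float = 2.0,
                      seed: int = 0) -> pd.DataFrame:
    """Resample historical returns with replacement, widening the tails
    by `shock_scale` to stress the strategy, and compound each path."""
    rng = np.random.default_rng(seed)
    r = returns.dropna().to_numpy()
    lo, hi = np.percentile(r, [5, 95])          # tail thresholds
    paths = np.empty((n_sims, horizon))
    for i in range(n_sims):
        sample = rng.choice(r, size=horizon, replace=True)
        # crude "shock": inflate draws beyond the 5th/95th percentile
        sample = np.where((sample < lo) | (sample > hi),
                          sample * shock_scale, sample)
        paths[i] = np.cumprod(1 + sample) - 1   # cumulative return path
    return pd.DataFrame(paths.T)                # rows = days, cols = sims

# Hypothetical usage:
# returns = pd.read_csv("trades.csv", parse_dates=["date"], index_col="date")["ret"]
# sims = monte_carlo_shock(returns)
# sims.quantile([0.05, 0.5, 0.95], axis=1).T.plot()
```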
Each of the 15 charts would have been a page of boilerplate plus Python on its own, and frankly a huge amount of interdisciplinary work went into the hundreds of thought steps the deep reasoning model took. It would have taken me days to fill in the gaps and finish the analysis by hand. The new crop of deep reasoning models that can iterate is powerful.
The gap between the previous "scratch work" of poking around a spreadsheet and getting pages of advanced data analytics tabula rasa is so large I almost don't have words for it. It often seems larger than the gap between pen and paper and a computer.
And then later, after work, I wanted to show real average post-inflation returns for housing areas that gentrify and compare them with non-gentrifying areas. Within a minute all of the hard data was pulled in and summarized. It then coded up a graph of the typical "shape of gentrification", which I didn't even need to clarify to get a good answer. Again, this is as large a jump as moving from an encyclopedia to an internet search engine.
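The underlying arithmetic is simple: deflate each nominal return by inflation and compound. A rough sketch, with made-up column names for the area indices and CPI:

```python
# Minimal sketch of the real-return comparison; series names are assumptions.
# real return = (1 + nominal) / (1 + inflation) - 1, compounded per period.
import pandas as pd

def real_cumulative_return(price_index: pd.Series, cpi: pd.Series) -> pd.Series:
    """Deflate a home-price index by CPI and return cumulative real growth."""
    nominal = price_index.pct_change()
    inflation = cpi.pct_change()
    real = (1 + nominal) / (1 + inflation) - 1
    return (1 + real.fillna(0)).cumprod() - 1

# Hypothetical usage:
# gentrifying = real_cumulative_return(df["gentrifying_index"], df["cpi"])
# stable      = real_cumulative_return(df["stable_index"], df["cpi"])
# pd.concat({"gentrifying": gentrifying, "stable": stable}, axis=1).plot()
```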
I know it's used all over finance, though. At Jane Street (upper-echelon proprietary trading) they have it baked into their code development in multiple layers, in actually useful ways, not just the "autocompletion" of mass-market tools. It is integrated into the editor and can generate code, but there is also AI that screens all of the code that gets submitted, and an AI "director" that tracks all of the code changes from all of the developers. If a program starts failing an edge case that wasn't apparent earlier, the director can work back through the commits, find where the dev went wrong, and explain it.
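To be clear, I'm only guessing at the mechanics, but the "find where the dev went wrong" step could be as simple as a bisection over the commit history against the newly failing edge-case test. A toy sketch of that idea, where the function names and test hook are hypothetical and not anything from their actual stack:

```python
# Purely illustrative: given an ordered commit list and a test that now fails,
# bisect to the first bad commit. Not a description of any real in-house system.
from typing import Callable, Sequence

def first_bad_commit(commits: Sequence[str],
                     passes_edge_case: Callable[[str], bool]) -> str:
    """Binary search for the earliest commit where the edge-case test fails,
    assuming all commits before it pass and all after it fail."""
    lo, hi = 0, len(commits) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if passes_edge_case(commits[mid]):
            lo = mid + 1      # still passing; culprit is later
        else:
            hi = mid          # failing; culprit is here or earlier
    return commits[lo]

# Hypothetical usage: check out each commit and run the failing test.
# culprit = first_bad_commit(commit_shas, lambda sha: run_test_at(sha, "edge_case_x"))
```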
Then the data generated by all of the engineers and AI agents is fed back into in-house model training, which in turn feeds improvements back into the systems above.
All of the dismissiveness reminds me of the early days of the internet. On that note, this suite of technologies seems large, somewhere in between the introduction of the digital office suite (Word/Excel/etc.) and perhaps the Internet itself. In some respects, when it comes to its iterative nature (which often degrades to noise if mindlessly fed back into itself, but in time will be honed to, say, test thousands of changes to an engineering Digital Twin), it seems like something that may be more powerful than both.
This points to research I've seen repeatedly now showing that people who are intelligent enough to use AI well get 10x more from it than those who can't imagine what to use it for.