For those in the finance space, are you actually seeing any real AI tools being used? Like for actual operational tasks?
I've really only seen it used for research / exploration thus far. Either for economic research slide decks or for exploring trading hypotheses.
Yes. On the accounting side, agents can handle a lot of the low-value work like recons and other ledger activity pretty well. On the investment side, I think like you pointed out it’s going to be a lot of research: industry, company, macro, etc. The value is in letting it run on top of the data you have and put together ideas at a quicker pace than a human can. There is still a human in the loop, but it can do a nice job of lining up thoughts you might have otherwise missed.
Pretty good as a dev with finance stakeholders. We have skills in place acting over our automated month-end close, and they were able to perform manual checks and flag issues, for example.
Nowhere near self-sufficient tools, though; just great for answering questions over the data that would usually take a few hours of custom scripting/Excel. Frankly, I wouldn't trust our stakeholders using AI directly either.
Seen it used in some of the fraud models (I work in insurance). So that's both from the perspective of people trying to claim fraudulently and from suppliers overcharging. I can't say how much of a lift we actually get vs existing ML models.
Nope. If anything, firms are pulling back (I'm close with someone who works at BlackRock).
> For those in the finance space, are you actually seeing any real AI tools being used? Like for actual operational tasks?
> I've really only seen it used for research / exploration thus far
Summaries and translation for sure.
Speaking with devs in the field, I know that AI tools are used to summarize and extract data from... PDFs. Now, thankfully, LLMs got better at answering "How many 'r's in 'strawberry'?" and it looks like they're good enough for summarizing PDFs and extracting key numbers, but I'd still be cautious.
And I've got a friend who's a translator specifically for financial documents: she's a contractor and getting about 1/10th of the work (and 1/10th of the pay) she used to have, since now she's only tasked with verifying that the translations are correct. Of course she already had lots of tools, way before the LLM era, automating some of her work, but she was still billing for the use of those tools. Now LLMs are doing nearly all the work, and not "for her": it's happening upstream and she only gets the output of the LLMs and has to verify it. And there aren't that many errors.
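The PDF key-number extraction mentioned above can be made less scary with a cheap guardrail: ask the model for structured JSON, then reject any figure that doesn't literally appear in the source text. This is a hedged sketch under assumptions, not anyone's production pipeline; `call_llm` is a stub standing in for whatever chat-completion API you actually use.

```python
import json
import re

def call_llm(prompt):
    # Stub so the flow runs offline; swap in a real model call here.
    return '{"revenue": "4.2", "net_income": "0.9"}'

def extract_figures(pdf_text):
    """Ask the model for key figures, then cross-check against the raw text."""
    prompt = (
        "Extract revenue and net_income (in $bn) from this filing as JSON:\n"
        + pdf_text
    )
    figures = json.loads(call_llm(prompt))
    # Every number that actually occurs in the source text.
    found = set(re.findall(r"\d+(?:\.\d+)?", pdf_text))
    # Drop anything the model may have hallucinated.
    return {k: v for k, v in figures.items() if v in found}

text = "Revenue was $4.2bn; net income was $0.9bn."
print(extract_figures(text))  # {'revenue': '4.2', 'net_income': '0.9'}
```

The cross-check is crude (it won't catch a number copied from the wrong line), but it turns silent hallucinations into visible gaps a human can chase down.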
We’re integrating AI tooling into the Bloomberg Terminal for everyone to use.
https://www.bloomberg.com/professional/insights/press-announ...
On the spend management side of things, I've found pretty remarkable success in letting LLMs check "does this receipt match this reimbursement request, and based on all the information about the user, the request, and our policy, is it correctly allocated to the appropriate GL, Location, Department, and Project codes?" If the verification step fails, it kicks it back, and the user can either override it (which gets it flagged for AP review) or fix it. It does substantially better than the naive Bayes classifier I was using before.
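The verification-and-kickback flow above can be sketched roughly like this. Everything here is an assumption for illustration: the policy text, the JSON schema, and `call_llm` (a stub in place of a real chat-completion API).

```python
import json

# Hypothetical policy snippet; a real one would be much richer.
POLICY = "Meals under $50 go to GL 6200; travel goes to GL 6400."

def build_prompt(receipt_text, request):
    return (
        "You are an AP reviewer. Given the receipt, the reimbursement "
        "request, and company policy, answer in JSON: "
        '{"match": bool, "codes_ok": bool, "reason": str}\n'
        f"Policy: {POLICY}\n"
        f"Receipt: {receipt_text}\n"
        f"Request: {json.dumps(request)}"
    )

def call_llm(prompt):
    # Stub so the example runs offline; replace with a real API call.
    return ('{"match": true, "codes_ok": false, '
            '"reason": "GL 6400 (travel) used for a meal"}')

def verify(receipt_text, request):
    verdict = json.loads(call_llm(build_prompt(receipt_text, request)))
    if verdict["match"] and verdict["codes_ok"]:
        return {"status": "approved"}
    # Kicked back to the user; an override would route it to AP review.
    return {"status": "kicked_back", "reason": verdict["reason"]}

result = verify("Dinner $42.10", {"amount": 42.10, "gl": "6400", "dept": "Sales"})
print(result["status"])  # kicked_back in this stubbed example
```

Forcing a structured verdict (rather than free text) is what makes the kickback/override routing mechanical instead of a judgment call on parsing model prose.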