I am hoping at some point we can move towards having more nuanced conversations about AI and the role it should play in our world. It seems like currently the only two camps are at either extreme.
Isn't there a middle ground between removing AI from the world entirely and just sitting back and letting it take over everything? I want to talk about responsible AI use: how to mitigate its effects on society, account for energy consumption, and so on.
I think this is where I sit. I'm personally of the opinion that AI crawlers (and thus the companies behind them) should respect robots.txt, and that they shouldn't be trying to scale up to the point of adversely affecting the environment (both the natural environment and the supply chain).
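For what it's worth, opting a site out of crawlers via robots.txt is just a matter of listing the user-agent strings the companies publish. A minimal sketch (the names below, like GPTBot and CCBot, are ones these operators have published; the list is illustrative, not exhaustive, and compliance is voluntary on the crawler's part):

```
# robots.txt - ask some well-known AI training crawlers to stay out.
# Honoring this is voluntary; it only works if the crawler respects it.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```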
I do find value in mindfully using models. Perhaps I've got a weird thing to troubleshoot on my Linux server and I just don't want to spend the time or mental effort tracing it back myself.
Because I do tend to use AI mindfully, I strongly dislike Microsoft's strategy of constantly pushing their AI solution, Copilot. I would rather use it when I feel it's right than be reminded around every corner that it's a thing I can use to save time and increase my efficiency.
This is my take too. When we were imagining AI, what use cases did we have in mind back then? They were grand visions of AI taking care of major problems. We should be pushing for responsible AI deployment: starting in low-risk areas and moving up to more serious uses once we know the tools work in less catastrophic situations.
Kinda surprised to see this type of take from someone who participates on this website. I feel like this is the place where I've seen that middle ground surface the most. Just look at the overall shift in the past year: from semi-handwaving to feeling like it must be embraced, and toward identifying the problems it creates and how to address them. That's exactly what you're describing.
I think AI, as a properly utilized tool, is amazing. Our lack of restraint in just throwing it into everyone's hands without any understanding of the tools they're using is horrifying. I'd imagine a lot of the community here echoes that same sentiment, but maybe not, and I'm just making assumptions.
Venture capital bet on AI taking over the world, so any conservative usage of LLMs will not get funding in the near future. The subtler reason is that backing conservative usage of LLMs sends a signal that devalues their primary investments.