They're going to push "AI on the edge" and "IoT" nonsense again.
Absolutely, unbelievably cooked. Anyone pushing that nonsense, short with leverage.
Low-latency connectivity + goliath data centres will always beat on-device inference/training.
> Low-latency connectivity
That's not exactly easy. I doubt on-device training will become much of a thing, but on-device inference is desirable in all sorts of distributed use cases. We're still a long way off from reliable internet everywhere, especially when you want to start pushing large quantities of sensor data down the pipe.
I can't even get reliable internet on my phone in the centre of London.
Not necessarily. There are lots of use cases for on-device AI inference. I run YOLO on an Nvidia Jetson-powered Lenovo ThinkEdge, which processes incoming video at full frame rate on four channels with recognition and classification for a bespoke premises security system. No clouds involved other than the Nix package manager etc. That said, your argument may carry more weight when you're talking about ultra-low-power devices like an Arduino running AI inference locally; that seems like more of a stretch.
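For anyone curious, here's a minimal sketch of what single-channel local inference like that can look like, assuming the Ultralytics YOLO Python package and OpenCV; the model file, camera index, and confidence threshold are illustrative placeholders, not the actual setup described above:

    # Local YOLO inference on one video channel. No frame ever leaves the device.
    # pip install ultralytics opencv-python
    import cv2
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")   # small model; Jetson-class hardware runs it at full frame rate
    cap = cv2.VideoCapture(0)    # one camera; the setup above multiplexes four channels

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Detection and classification happen entirely on-device.
        for result in model(frame, conf=0.5, verbose=False):
            for box in result.boxes:
                x1, y1, x2, y2 = map(int, box.xyxy[0])
                label = model.names[int(box.cls[0])]
                cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
                cv2.putText(frame, label, (x1, y1 - 5),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
        cv2.imshow("local detections", frame)
        if cv2.waitKey(1) == ord("q"):   # press q to quit
            break

    cap.release()
    cv2.destroyAllWindows()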
> Low-latency connectivity + goliath data centres will always beat on-device inference/training.
Except that it's not always an option...
SOOOO buy Qualcomm. The second they start talking about AI-IoT, the stock is gonna skyrocket.
We live in a broken world.
Low latency, low power, portable
pick two.
Well, actually you can't really; low latency is pretty hard to do, full stop.
Tf are you on? Just look at Meta's display glasses. It's all on-board compute.
Realtime and offline would like a word.
> "AI on the edge" and "IoT" nonsense again.
I love it when my device stays dumb (or at least connects locally) and doesn't become abandonware 6 months after release because the cloud provider felt it was a chore to keep running.