I remember a joke from a few years ago about an "AI" that was "learning" on its "own," which in practice meant periodically starting from scratch with a new training set curated by a large team of researchers, who themselves relied on huge (far away) teams of annotators.
TL;DR: it depends on where you define the boundaries of your "system".
I think that, from a properly systemic view, the joke is more correct than not: AI is just the frontend of people ...