How does that even begin to make sense?
I want to protect my child from X type of content -- one of many jobs of a parent, but I will trust all content to self-report as child-inappropriate? "Inappropriate" is entirely subjective and cannot be defined as some sort of universal boolean -- and that's before you get to actively malicious actors like Meta and TikTok exploiting children for their content farms and ad-impression factories.
If the user owns and controls their computers -- as they should -- then that subjective content-filtering layer belongs there, in the owner's control. If it's a child's device, then the parent owns it, not the child.
The idea is that society should have some common standards for what's inappropriate for children. For example, parents don't want their kids to buy cigarettes, but stores also don't want to sell them cigarettes. When there's consensus like this, cooperation is possible, and parents have an easier time when they get cooperation from the rest of society.
But there isn't going to be consensus on everything, so content filters are still needed.