* Classifying accounts as child accounts (moderated by a parent)
* Allowing account moderators to review content in the account that is moderated (including assigning other moderation tools of choice)
In all cases, transparency and enabling consumer choice should be the core focus.
Additionally: by default, treat everyone online as an adult. Parents who let their kids online without supervision, or without some setting indicating the user agent is operated by a child, are effectively choosing to let their children interact with strangers. This tends to work out better in more controlled and limited circumstances, where the adults involved have the resources to provide suitable supervision.
At the same time, any requirements should apply only to commercial products. Community (gratis / not for profit) efforts presumably reflect the needs of a given community.
Notice also that even if you do this, you still don't need the service to be able to decrypt the content, only the parent.
This could even be generically useful, e.g. you have a messenger used by business and then the messages can be read by the client company's administrator/manager but not the messaging company's.
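The "readable by the client company's administrator but not the messaging company" idea is essentially per-reader key wrapping: one content key encrypts the message, and a copy of that key is wrapped for each authorized reader, so the service only relays opaque blobs. A minimal sketch of that pattern (the names are mine, and the toy SHA-256 stream cipher stands in for real authenticated encryption and key wrapping such as AES-GCM and ECIES; it is not secure):

```python
import hashlib
import os


def _keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher (SHA-256 in counter mode). Illustration only;
    a real system would use vetted primitives, not this."""
    out = bytearray()
    for offset in range(0, len(data), 32):
        pad = hashlib.sha256(key + offset.to_bytes(8, "big")).digest()
        out.extend(b ^ p for b, p in zip(data[offset:offset + 32], pad))
    return bytes(out)


def encrypt_for_readers(plaintext: bytes, reader_keys: dict[str, bytes]):
    """Encrypt once under a random content key, then wrap that key
    separately for each authorized reader (recipient, org admin, parent).
    The messaging service stores only ciphertext and wrapped keys."""
    content_key = os.urandom(32)
    ciphertext = _keystream_xor(content_key, plaintext)
    wrapped = {name: _keystream_xor(k, content_key)
               for name, k in reader_keys.items()}
    return ciphertext, wrapped


def decrypt(ciphertext: bytes, wrapped_key: bytes, reader_key: bytes) -> bytes:
    """Unwrap the content key with your own key, then decrypt the message."""
    content_key = _keystream_xor(reader_key, wrapped_key)
    return _keystream_xor(content_key, ciphertext)
```

The same structure covers the parent case above: the parent's key is just one more entry in `reader_keys`, and the service still never holds anything it can decrypt.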
so the school takes on that responsibility, as deputized by the parents.
Kids don't get unfettered access to the streets while at school. They can't take their bikes and ride out at will. What makes the internet and devices any different? The devices provided by the school should be lockdown-able, and kids should not be provided their own device unless there's a parental lock (which is enabled during school hours, and is similarly locked down).
Each school brews its own system more or less.
Why is the answer people seem to arrive at being "mandatory collection of blackmail material that will ruin careers and relationships" when it comes to the Internet?
> Surveys by Britain’s tech regulator, Ofcom, find that among children aged 10-12, over half use Snapchat, more than 60% TikTok and more than 70% WhatsApp. All three apps have a notional minimum age of 13: https://archive.ph/y3pQO
Once you get the classification correct (and AI cannot do this; only community ombudsmen/age verifiers can, in a privacy-first way*), the app stores can easily tell app devs which accounts are sensitive, and filtering should be much more effective.
*Basically, once your age is verified by a real human for your device (using device-local encryption to verify biometrics), you are set. No kid should be able to bypass this and install apps on devices that their parents hand to them. There will always be black-market devices with these apps, but existing tech offers ways to keep those to a minimum.
Why do you need any third parties whatsoever? Just have the parents do it. They configure a setting on the kid's device, which the device uses to determine what content to display. All you need from the app/service is a rating for the content. No third party should ever have to know anything about the user, because the user's device knows that, and the device knows it because the parents configured it.
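The mechanism being described is small: a parent-set ceiling on the device, a rating label from the app, and a comparison that happens entirely locally. A sketch under assumed names (the rating scale and `DeviceProfile` are hypothetical, not any platform's real API):

```python
from dataclasses import dataclass

# Hypothetical rating scale, ordered from least to most restricted.
RATINGS = ["all-ages", "teen", "mature", "adult"]


@dataclass
class DeviceProfile:
    """Set once by the parent on the child's device. Defaults to adult,
    matching the 'treat everyone online as an adult by default' stance."""
    max_rating: str = "adult"


def visible(profile: DeviceProfile, content_rating: str) -> bool:
    """The only thing the app/service must supply is a rating label;
    the comparison runs on the device, so no third party learns anything."""
    return RATINGS.index(content_rating) <= RATINGS.index(profile.max_rating)
```

Note the service's role here is limited to honest labeling; everything identifying the user as a child stays on the device.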
Who verifies that the person verifying the child's age is actually authorised to do that? Who verifies that verification? And so on up. This needs a chain of trust that can only end up at government. And that chain of trust will then be open to being abused by shitty politicians.
What mechanism in (e.g) Linux is responsible for implementing this age verification so that it cannot be tampered with (or trivially overruled by a sudo call)? Which organisation is legally liable if that mechanism doesn't do its job? How can we stop someone from overwriting that mechanism with their own, in an open OS that is deliberately designed to allow anyone with root to change anything on it?
What you propose here is the death of open computing. And I personally believe that we would be much better off as a species if we kept open computing and just taught our kids how to handle social media better.
This one is easy. You just don't require all devices to do that. The parent isn't required to give the kid a general purpose computer. You don't need to prevent every device from running DOOM, only one device, and then parents who want to impose such restrictions get the kid one of those.
It's ok to drive Dad's truck unless he catches you and tells you no.
Dad should either know his children would never drive the truck without permission, or keep his keys as safe as his wallet (and if he can't trust his kids with his keys, you can bet his wallet needs protection too).