Nothing. This has never been about protecting children. It is about tracking real identity from every source to every destination, otherwise known as user tracking. If this were about protecting children, they would require an RTA header on all adult and user-generated content sites and require the most common user agents to look for that header when parental controls are enabled. No tracking, no uploading anything. [1] That would be sufficient for small children, which is more than we have now or will ever have, thanks to corporate greed and lobbying.
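For reference, the RTA label is a single fixed string that a site can expose as a `Rating` response header or an HTML meta tag. A user agent with parental controls enabled could block on it with something this small (a minimal sketch; the function name and blocking policy are illustrative, not an existing user-agent API):

```python
# The fixed RTA label string (rtalabel.org); sites can serve it as a
# "Rating" response header and/or as an HTML meta tag in the page.
RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"

def is_rta_labeled(headers: dict[str, str], body: str) -> bool:
    """True if a response self-labels as adult content via the RTA label."""
    if RTA_LABEL in headers.get("Rating", ""):
        return True
    # Fallback: look for the label in a meta tag in the page body.
    return RTA_LABEL in body

# A parental-control-enabled user agent would simply refuse to render
# anything that carries the label; no identity is exchanged either way.
blocked = is_rta_labeled({"Rating": RTA_LABEL}, "")
```

The point of the scheme is that the check is purely local and the label is static, so neither side learns anything about the user.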
except this is not true at all
yes, there are people who systematically try to hijack child protection laws for stuff like that all the time
but e.g. the Californian law is very clearly intended to avoid exactly that (that = tracking real identity)
> they would require an RTA header
they are politicians focused on lawmaking, they have no idea what a "header" even is!
A politician's job is to identify issues, consult people with expertise, propose a solution based on that expert feedback, and then listen to further feedback, including from other groups. If they need to know what an HTTP header is and how it works, something went really wrong.
But this is also where things often do go wrong, through a) dishonest and outright malicious consultants telling politicians bullshit, and b) politicians having an oversimplified understanding of a topic while thinking it's still good enough to extrapolate from, leading them to nonsensical outcomes.
And if, on top of that, large parts of the industry that do care about non-abusive solutions loudly refuse to provide any solution and denounce anyone who tries, you are basically opening even more doors for anyone with malicious intentions. Which is pretty much the situation we have now.
Even worse, many people in the tech/hacker community not only don't try to help find an acceptable solution, they often outright reject that there is even a problem.
But there is a problem, a huge one even.
As just one dumb example of many: it's currently harder for a teenager to get access to some wholesome softcore porn than it is to watch potentially traumatizing and definitely unhealthy content (whether it's violence or certain forms of hardcore porn(1)), or to access sites/apps with gambling, predation on children, hate mongering, glorification of bullying, etc. etc.
And let's not forget that most parents are non-technical people, which means most of the reasonably usable and privacy-protecting existing tools are not actually usable by them (and not available by default, and they can't reasonably evaluate which ones are okay either).
Also, please don't say "I grew up with an uncontrolled internet (~25-35y) and I am fine." Putting aside that the internet was very different back then, hardly anyone in that age range is truly mentally fine (for a lot of reasons, which anyway makes it a pretty bad argument).
> RTA header
is insufficient; age isn't just 18+ or 13+, though many media sites love to pretend that is the case.
Furthermore, this doesn't work for "feed" content, as the server needs to know what to filter out before returning content.
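To make the feed problem concrete: a response-side label only lets the client block what was already sent, while filtering a feed requires a request-side hint. A toy sketch (all names and the example data are hypothetical):

```python
# Toy feed filtering: the server must know a coarse age hint *before*
# assembling the response, otherwise it cannot pre-filter previews.
# All names and data here are hypothetical, not a real API.

FEED = [
    {"title": "cat videos", "min_age": 0},
    {"title": "true crime", "min_age": 16},
    {"title": "casino ads", "min_age": 18},
]

def fetch_feed(age_hint: int) -> list[dict]:
    """Server side: filter previews against the coarse hint."""
    return [item for item in FEED if item["min_age"] <= age_hint]

# With a response-only label (RTA-style) the whole feed would be
# all-or-nothing; with a request-side hint it can be pre-filtered.
previews = fetch_feed(age_hint=13)
```

Note the hint is a coarse category, not an identity; the server never needs to know who is asking, only which shelf to serve from.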
But this is also the direction I have proposed in previous comments, and it's not that far from the direction the Californian law went (but very much different from the UK law):
- Provide a min. age category indicator for all content (most times per app, sometimes per content in that app, sometimes per origin per content in that app (e.g. YT accessed through the browser)). But this needs to be more complicated than 13+/18+, as categories differ by country, and you should include tags and some other stuff.
- A parental control API which has a simple/naive default implementation but can be replaced with whatever the parents think is right.
- An API to get the user's age category (incl. localization, e.g. `us:13`). It needs explicit permission, and providers are not allowed to force it; every piece of content's min-age constraints still has to go through the parental control app. It's only for selecting content feeds/previews. The specific content served might still be rejected by the parental controls! Using it for anything else should be made criminally illegal, with personal liability for executives (e.g. using it to try to sniff the exact age/birth date of a person). An implementation which just reports `us:18` but then refuses anything above 13+ or similar must be treated as a legitimate possibility; the app must still work in general, but it might not show any further previews. Etc. Etc.
- The trust of age hints/evaluation is anchored solely in the parental controls, and the setup of the parental controls is the parents' responsibility. Any form of identification(2), AI face scans, or similar as a requirement for setting up parental controls (or for not being stuck with a permanent 13+ account or similar) _is strictly outlawed_.
- All products sold with a preinstalled OS must have a default parental control app which is trivial to set up in its default configuration, and the default setup must only require 1. localization (preset to the current country if known, changeable), 2. the child's age, to auto-adapt the age group, where alternatively the parent can set the age group directly, even though that means they have to change it manually in the future (needed for children who need special care). In its default setup it also must not track/spy on everything the child does.
- Adult accounts still need to be compatible with the APIs but will always report 18+/all content allowed.
- Products and e.g. downloadable OSes can decide to be "adult only", in which case access to them must be guarded like any other adult-only content (e.g. at the point of purchase), but then they don't need to support child accounts and can instead return a hard-coded 18+/all content allowed.
(This is already the short(er) version :/ — e.g. most countries have an 18-21 category; for many countries that category is only relevant for things which anyway involve identification (e.g. signing certain contracts, doing certain jobs), but e.g. the US relation to alcohol is an exception.)
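The split the list above describes, i.e. a coarse, localized age hint used only for feed selection, with local parental controls as the sole authority over what is actually shown, can be sketched as follows (a minimal illustration; the class, its methods, and the policy are hypothetical, only the `us:13` token format comes from the proposal above):

```python
# Sketch of the proposed split: the age hint is only a feed-selection
# aid and may deliberately over-report; the parental controls check
# every piece of content locally and are the only trusted gatekeeper.
# Nothing here is an existing API.

from dataclasses import dataclass

@dataclass
class ParentControls:
    """Parent-configured policy; the only trusted gatekeeper."""
    locale: str
    max_age_category: int

    def age_hint(self) -> str:
        # Localized category token, e.g. "us:13". An implementation may
        # also report "us:18" while still rejecting everything above
        # 13+; apps must tolerate that.
        return f"{self.locale}:{self.max_age_category}"

    def allows(self, content_min_age: int) -> bool:
        # Every piece of content is checked locally, regardless of what
        # hint was sent when fetching the feed.
        return content_min_age <= self.max_age_category

controls = ParentControls(locale="us", max_age_category=13)
hint = controls.age_hint()   # only for feed/preview selection
ok = controls.allows(16)     # local check still rejects 16+ content
```

The design point is that the server-facing hint and the locally enforced policy are decoupled, so nothing a server learns from the hint is reliable enough to profile the user with.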
---------
(1): And I don't mean just a bit of soft bondage, but things which will lead to serious long-term health issues and/or involve violence, glorification of violence, oppression, misogyny, implications of torture, rape, or child abuse (or, in the case of drawn/generated content, outright depictions), and even snuff.
(2): There can be some acceptable ways, e.g. a clerk checking your ID IRL without recording anything except yes/no, or digital ID setups which only communicate adult yes/no without identification, etc. But given that all relevant devices tend to be too expensive for children to buy themselves, and that you should also trust your child as it approaches adulthood (and might have the money), I don't think anything like that is really needed. In general this should focus on efficient solutions for the age group <16. IMHO if you still need parental controls for 16+, you messed up parenting.
An API inevitably means that more and more details about the user will be added over time. This is a user tracker's dream come true. No thanks. One static header, done and dusted.
> But this needs to be more complicated than 13+/18+
I will never agree with this, nor will most people. Content is either adult or not adult. That is how existing parental laws are structured in most countries. The parent must decide if the child is ready to view content rated anything other than "G". The parent decides, not some app, not some API.
A child account on a tablet, phone, or laptop need only prevent tampering with browser settings and enable parental controls by default, which in turn simply look for an RTA header or any other indicator that the site or content is adult or user-generated in nature. Keep it simple. If people won't enable looking for a header, then the only reason they would go much further and mess with an API would be if it were to the benefit of evil (marketing, sales, manipulation of the child, manipulation of the parent).
It doesn't hand over control of computing to governments
Also, I'm not convinced that borrowing a device presents a new or different failure mode. Children could always obtain physical contraband from their friends, so nothing has changed here.
also doesn't need to IMHO
it solves the problem of it being too trivial for a 12-year-old to access content which is at best quite problematic and at worst outright traumatizing
as in, the same reason we have laws that a clerk glances at the age on your ID if you look young and buy alcohol, but your parents are still allowed to let you drink with them if they think it's right (or watch a 16+ movie with them, etc. etc.)
this is also why it really shouldn't be anything much fancier than parental controls checking the min age of content locally / an indication of the age group for feed fetching. Everything else is disproportionate (apart from all the other issues it might have).