Controversial Kids’ Code aims to keep children safe online
California has passed a bill designed to make the internet a safer place for children. The bill, commonly referred to as the “Kids’ Code”, has cleared the State Senate, and if signed by Gov. Gavin Newsom it will become law.
What is it, and how is it designed to help children be safe online? Perhaps more importantly, why do some people feel the Code may not be all it’s cracked up to be?
From COPPA to Kids’ Code
The US has something called the Children’s Online Privacy Protection Act (COPPA for short). The act:
…imposes certain requirements on operators of websites or online services directed to children under 13 years of age, and on operators of other websites or online services that have actual knowledge that they are collecting personal information online from a child under 13 years of age.
For some time now, the Act has been criticised for having certain shortcomings. The primary issue for most folks is that COPPA is a grey area for teens, who often use apps and services which aren’t necessarily designed for them. Because COPPA only covers sites and services directly targeting children under 13, the moment an older child uses an app or service designed for someone younger, the COPPA wheels start to come off.
The Kids’ Code aims to fix that. From the text:
This bill would enact the California Age-Appropriate Design Code Act, which, commencing July 1, 2024, would, among other things, require a business that provides an online service, product, or feature likely to be accessed by children to comply with specified requirements, including a requirement to configure all default privacy settings offered by the online service, product, or feature to the settings that offer a high level of privacy, unless the business can demonstrate a compelling reason that a different setting is in the best interests of children, and to provide privacy information, terms of service, policies, and community standards concisely, prominently, and using clear language suited to the age of children likely to access that online service, product, or feature.
Extra safeguarding
Online services would need to begin adding additional safeguards for anyone under the age of 18. Although nothing would be in force until 2024, as noted above, requirements include:
Defaulting to the highest possible privacy settings (a rough sketch of what that could look like in practice follows this list).
Making it obvious to a child when their location is being monitored.
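For developers, “privacy by default” generally means shipping the most protective settings and only relaxing them through an explicit, informed opt-in. The bill doesn’t prescribe any particular implementation, so the snippet below is purely a hypothetical illustration of the idea; the settings names and structure are our own invention, not anything from the legislation.

```typescript
// Hypothetical example only: the Kids' Code does not mandate any specific schema.
// The principle is that every privacy-relevant option starts at its most
// protective value, and anything less private requires a deliberate opt-in.

interface PrivacySettings {
  profileVisibility: "private" | "friends" | "public";
  locationSharing: boolean;        // off unless explicitly enabled
  personalisedAds: boolean;        // behavioural profiling disabled by default
  showLocationIndicator: boolean;  // make any location monitoring obvious
}

// Defaults for accounts likely to belong to under-18s.
const minorDefaults: PrivacySettings = {
  profileVisibility: "private",
  locationSharing: false,
  personalisedAds: false,
  showLocationIndicator: true,
};

// Any move away from the defaults should come from an explicit user choice,
// never be toggled silently by the service itself.
function applyUserChoices(
  defaults: PrivacySettings,
  explicitChoices: Partial<PrivacySettings>
): PrivacySettings {
  return { ...defaults, ...explicitChoices };
}
```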
Advertising and profiling are a natural additional concern where children are involved. As a result, dark patterns would be prohibited. These are manipulative design tricks which nudge unwary users towards choices they might otherwise have avoided, which makes them a natural target for the bill.
Data Protection Impact Assessments (DPIAs) will also be required for any company which falls under the bill. DPIAs must take into consideration a variety of things, including, but not limited to:
…whether the design of the online product, service, or feature could harm children, including by exposing children to harmful, or potentially harmful, content on the online product, service, or feature,” and “whether the design of the online product, service, or feature could permit children to witness, participate in, or be subject to harmful, or potentially harmful, conduct on the online product, service, or feature.
This will likely require a huge amount of work to pin down correctly, especially for organisations with multiple products potentially in use by young children and teenagers. Is it feasible to do all of this in time for 2024?
Some reasonable concerns…
Not everyone is entirely on board with the bill’s content. There are fears of mandatory age verification, and suggestions that children will simply stop using new services because of the drag of having to prove their age and identity on every website.
There is also the question of how, exactly, you verify a child’s age. What valid identification do they have? Could their age be estimated via biometric or facial scanning? The face scanning aspect, in particular, is not proving popular:
I’ve been writing a lot about the awful, terrible, horrible, #CAKidsCode and how it would be dangerous for privacy with its age verification. But a trade association for the age verifiers reached out to say not to worry… they just want to scan everyone’s faces. Really.
— Mike Masnick (@mmasnick) August 29, 2022
All this additional verified data naturally makes a tempting target for data theft and fraud attempts. Can the companies collecting and storing this data guarantee it will be properly secured? What happens if or when it’s stolen or leaked?
These are pretty big questions, and at the moment, we don’t really have all of the answers. All we can do is wait and see what direction the bill heads in next.