Last week, a bipartisan group of lawmakers in the Senate introduced legislation to prevent social media companies from allowing kids under 13 on their platforms.
In recent years, experts and whistleblowers have come forward with evidence about social media’s potential danger to users, particularly its impact on the minds of young people. And recent efforts to ban TikTok have rightly spurred calls to regulate all social media platforms.
In theory, the Protecting Kids on Social Media Act is one step in that direction.
However, even if we assume all the lawmakers co-sponsoring the legislation are doing so in good faith, a couple of things seem clear at the outset: They’re going to have a hard time passing this bill, and an even harder time enforcing it if it’s actually enacted.
First, let’s see what’s in the bill.
As NBC News reports, the bill “would set a minimum age of 13 to use social media apps, such as Instagram, Facebook and TikTok, and would require parental consent for 13- to 17-year-olds.”
A little later in NBC’s report, we get a bit more clarity on what this ban would look like in practice:
The bill would ban social media companies from recommending content using algorithms for users under the age of 18. It would also require the companies to employ age verification measures, and would instruct them to create a pilot project for a government-provided age verification system that platforms could use.
Under the measure, the Federal Trade Commission and state attorneys general would be given authority to enforce the bill’s provisions.
On its face, the legislation doesn’t sound bad. But coming from a legislative body that has taken virtually no measures to curb social media use until now, this has the feel of a last-minute school project haphazardly thrown together.
Writing for Wired last week, Matt Laslo laid out some of the barriers to passing this legislation. Specifically, he mentions skepticism among Democrats and Republicans over the…