Can you have child safety and Section 230, too?
AI Summary
The newsletter analyzes recent jury verdicts against Meta and YouTube that found them liable for child safety harms through platform design features like infinite scroll and push notifications. The author argues these rulings create a path around Section 230 protections by targeting design rather than content, while defending the distinction between regulating mechanical features versus speech content.
Author Takes
Platform design regulation
Both content and design are necessary for people to be harmed at scale, and in a country where the Constitution forbids regulating content, design is the only thing left to regulate
Section 230 defenders
Rejects the argument that every design decision is a content decision protected under the First Amendment
Kids Online Safety Act
Remains skeptical, since the act could too easily be expanded to ban material about dissidents, LGBT people, and other disfavored groups
Contrarian Angle
WhatsApp replacing Instagram encryption
Meta discontinued encryption in Instagram, directing users to WhatsApp instead
Engineers who had worked on Instagram encryption are moving to WhatsApp
More from Casey Newton
The best argument I’ve heard for why AI won't take your job
Platformer launches a new podcast mini-series on AI and jobs, featuring Box CEO Aaron Levie arguing that AI will transform rather than eliminate most jobs
The Trump administration's AI doomer moment
The Trump administration is reversing its anti-regulation stance on AI after Anthropic's new Mythos model demonstrated dangerous cybersecurity capabilities
The week that Meta employees became training data
Meta has implemented invasive monitoring software called MCI on employee computers to capture their every mouse movement, click, and keystroke to train AI models