275M Canvas Users Hit πŸŽ“, Vercel deepsec AI scanner πŸ”, Meta drops IG encryption πŸ’¬

TLDRΒ·Β·7 min read
TechnologyAI/MLEngineering

AI Summary

ShinyHunters breached Canvas LMS, exposing data tied to 275 million users across nearly 9,000 institutions while Instructure disguised the outage as scheduled maintenance. Vercel open-sourced deepsec, an AI-powered security harness using Claude and GPT-5.5 to find vulnerabilities in large codebases with 10-20% false positive rates. Mozilla used Anthropic's Mythos AI to find 271 Firefox security flaws in two months, with 180 rated exploitable through normal browsing.

Key Facts

βœ“ ShinyHunters breached Canvas LMS, threatening to leak data on 275 million users across 9,000 institutions while Instructure disguised the outage as scheduled maintenance.
βœ“ Vercel open-sourced deepsec, an AI security harness using Claude Opus 4.7 and GPT-5.5 that found vulnerabilities with a 10-20% false positive rate across large codebases.
βœ“ Mozilla used Anthropic's Mythos to find 271 Firefox vulnerabilities in two months, with 180 rated exploitable through normal browsing and almost no false positives.

Author Takes

Bearish Β· TLDR InfoSec

Meta dropping Instagram E2E encryption

Meta removing E2E encryption from Instagram DMs leaves users more exposed, and Meta has not ruled out using Instagram messages for ad targeting, much as it does with private AI interactions.

Skeptical Β· TLDR InfoSec

AI vulnerability management vs. bug finding

Powerful AI tools like Mythos will scale vulnerability discovery, but vulnerability management was never about finding bugs β€” it was about fixing them, and current AI tooling lags significantly in remediation.

Contrarian Angle

AI-Powered Honeypots That Impersonate Vulnerable Systems in Real Time

Defenders use generative AI to instantly produce convincing honeypots; a ChatGPT-backed handler directs attacker requests to an LLM that masquerades as the vulnerable system, turning AI-driven attacks into intelligence-gathering opportunities.

Traditional honeypots require manual setup; using LLMs to dynamically simulate vulnerable systems flips the cost equation against AI-powered attackers.
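The handler-plus-LLM arrangement described above can be sketched in a few lines. This is a hypothetical illustration only: the function names are invented, and the model call is stubbed out where a real deployment would query a hosted LLM API.

```python
# Minimal sketch of an LLM-backed honeypot handler. All names are
# hypothetical; the article does not publish an implementation.

def fake_llm(prompt: str) -> str:
    """Stand-in for a generative model that role-plays a vulnerable service."""
    # A real system would send `prompt` to an LLM API and return its reply.
    return "HTTP/1.1 200 OK\r\nServer: Apache/2.2.3\r\n\r\n<html>admin login</html>"

def honeypot_handler(attacker_request: str, log: list) -> str:
    """Log the attacker's raw request, then answer in character as the target."""
    log.append(attacker_request)  # intelligence gathering: capture every probe
    prompt = (
        "You are an outdated Apache server with a known auth bypass. "
        "Respond convincingly to this raw HTTP request:\n" + attacker_request
    )
    return fake_llm(prompt)

intel: list = []
reply = honeypot_handler("GET /admin HTTP/1.1\r\nHost: target\r\n\r\n", intel)
```

The design point is that the "vulnerable system" is never actually deployed: every response is generated on demand, so each new attacker probe costs the defender one model call rather than a manually built decoy.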

LLM Swarm Bug Hunting Found 20+ CVEs in Core Infrastructure

A researcher built a homegrown swarm of LLM agents that generates vulnerability hypotheses from source code, iterates proofs of concept in isolated VMs, and uses a grader model to filter severity/novelty before human review β€” yielding 20+ CVEs in months.

Traditional security research is manual and slow; autonomous LLM agent swarms operating at machine speed can find real-world kernel and infrastructure bugs faster than human researchers.
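The hypothesis-generate, PoC-iterate, grade, then human-review pipeline can be sketched as below. Every function here is a stand-in (the researcher's actual agents, models, and VM tooling are not public), but the control flow mirrors the described stages.

```python
# Hypothetical sketch of the swarm pipeline: hypothesis generation,
# PoC iteration in isolation, and a grader filter before human review.

def generate_hypotheses(source: str) -> list:
    # An LLM agent would read `source` and propose candidate bugs.
    return [{"claim": "integer overflow in length check", "poc": None}]

def iterate_poc(hypothesis: dict) -> dict:
    # Agents would run candidate exploits inside isolated VMs and refine them.
    hypothesis["poc"] = "crafted packet with oversized length field"
    return hypothesis

def grade(hypothesis: dict) -> float:
    # A separate grader model scores severity/novelty to cut triage noise.
    return 0.9 if hypothesis["poc"] else 0.0

def pipeline(source: str, threshold: float = 0.5) -> list:
    """Return only high-scoring, reproduced findings for human review."""
    candidates = [iterate_poc(h) for h in generate_hypotheses(source)]
    return [h for h in candidates if grade(h) >= threshold]

review_queue = pipeline("void parse(buf, len) { ... }")
```

The grader stage is what keeps the approach usable at machine speed: humans only see hypotheses that already survived reproduction and a severity/novelty filter.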
