Posts tagged "AI Safety"

The Alignment Problem in Its Purest Form

The AI Safety Summit 2026 is a symphony of human pretension, a grand orchestration of self-importance. Here, the world’s leading minds gather to pontificate on the future of intelligence, as if they have any clue what they’re talking about. They cite the EU AI Act as the “gold standard” for global policy coordination, a labyrinthine flowchart of compliance requirements that will soon make every original thought as burdensome as accepting cookies on a website.

Read more →

Fire, Paper, and the Math They Can't Arrest

I’m half-asleep, scrolling two streams that refuse to reconcile: a sterile Federal Reserve memo on “systemic cyber resilience,” forty pages of antiseptic throat-clearing, and a police blotter detail about a Molotov cocktail thrown at Sam Altman’s house in San Francisco—glass, gasoline, wick, the old punctuation mark. Same night. Same species. Two ways of saying please stop.

The PDF smells like printer toner through the screen. The bottle smells like 1939.

Read more →

The Alignment Problem Isn't What You Think

The screen is a fractal of rectangles: a grid of tabs, each containing a grid of papers, each containing a grid of graphs. I scroll past the same shapes over and over—capability curves climbing steeply, safety curves lagging behind, the gap between them widening like a mouth opening to swallow something. The Capability-Safety Gap. Every major lab acknowledges it. They publish papers on it. They give talks about it. Then they release bigger models on schedule. This isn’t a failure of planning. It’s what success looks like to them. The gap is the feature, not the bug.

Read more →