The Score Went Up. The Model Didn't.

LLM benchmarks are, at best, a guess at genuine model capability.
The Game Theory of AI Safety Talk

Why what labs say about safety is a strategic signal, not a statement of values — and what that means for regulation.
The AI Alignment Paradox: When Making AI Safe Hands Adversaries the Keys

In conventional security, hardening a system makes it harder to attack: you patch vulnerabilities, you reduce the attack surface, and defence improves in lockstep with robustness. AI alignment breaks this assumption.
Architecture Wars: How Physics Shapes AI Strategy

The pursuit of AI supremacy has reached an inflection point where fundamental physics, rather than algorithmic ingenuity alone, dictates competitive advantage.