Blog, portfolio, and general musings of Dr Gareth Roberts.
Agents in a non-static world
Why dynamic processing, not bigger context, is the next reliability frontier
AI Safety
Emotional Large Language...Models?
Anthropic's new emotions paper does not show that Claude feels anything. It shows something more operationally important: affect-like internal states can tilt a model towards flattery, cheating and escalation — often before the transcript gives the game away.
Agentic AI
The Agent Wars Will Be Won on Memory, Not Models
Long context is RAM. RAG is a filing cabinet. The frontier is memory: what an agent keeps, compresses, revises, forgets, and surfaces when action is on the line.
AI Architecture
Reasoning That Knows When to Shut Up
Epistemic fidgeting, metacognitive failure in silicon, and why the Qualcomm paper on edge reasoning matters more than its benchmark tables suggest.
AI Architecture
The Smuggled Ideas
Four papers that quietly crossed the AI–cognition border — and what got lost in transit.
Vibe Coding
Vibe Coding in Regulated Production
Ship Like a Maniac, But Bring Receipts
AI Evaluation
The Score Went Up. The Model Didn't.
LLM benchmarks are, at best, a rough proxy for genuine model capability.
Context Engineering
The Context Wars
The quiet discipline that will determine whether AI tells the truth.
AI Safety
The Game Theory of AI Safety Talk
Why what labs say about safety is a strategic signal, not a statement of values — and what that means for regulation.
Cognitive Neuroscience
What Neuroscience Knows That AI Ignores
Why the next leap in artificial intelligence will come from structure, not scale — and why a century of brain science has already drawn the blueprint.
AI Architecture
Continuous Thought Machines? Giving Neural Networks Time to Think
Time as the medium in which computation unfolds, measured and synchronised at the level of individual neurons.
AI Safety
The AI Alignment Paradox: When Making AI Safe Hands Adversaries the Keys
In conventional security, hardening a system makes it harder to attack. You patch vulnerabilities, reduce attack surface, and defence moves in lockstep with robustness. AI alignment breaks this assumption.