I'm from the kind of place in Western Australia where the sky is bigger than the town. Red dirt, mine sites, not much else. Somewhere between there and a PhD in cognitive neuroscience I got fixated on a single question: how does a brain hear a sentence and know what to do?
I spent years in the scanner room mapping that process — watching prefrontal cortex light up as people turned instructions into action. Then I left academia and discovered the same problem wearing different clothes: language models parsing prompts, agents executing tool calls, systems where a misplaced word isn't just confusing — it's a vulnerability.
Now I build and break these systems for a living. I've led AI teams in insurance and mining, published on prompt injection and reasoning model safety, and run a small consultancy for companies that want to deploy AI without ending up in the newspaper.
This site is where I write about what happens at the boundary between cognition, language, and machine behaviour. The blog posts are straightforward thinking out loud. The labs pieces are something else — interactive, hand-built, designed to be experienced rather than skimmed.
I hike too much, teach when I can, and spend more time than is healthy thinking about consciousness.
Get in touch: gareth.roberts@ieee.org · LinkedIn · GitHub