Computers can tell when you’re looking at them, or in a hurry, or demonstrating to a group. They behave differently when you’re heightened and febrile. Sure, you could explain this as an illusion, one of the cognitive biases that business books like to talk about, just a way of explaining away errors you don’t know you’re making. When it happens to me, on a Zoom call or presenting, I joke about the undiscovered particles – the interferons – that stressed human brains emit, which get tangled with the complex, unfathomable processes of computation and make things go wrong.

What if this were true, though, or something like it? Wouldn’t this be a better route to understanding machine intelligence than all this arithmetic? What if we started by understanding when computers can empathise with us, when they respond to our feelings and inner states? The uncanny abilities of LLMs and GANs have really only shown us how inflated our sense of our own second-rate thinking was. It turns out much of what we call intelligence is really just reproducing clichés and banal associations. Statistical language processing isn’t intelligence. Reading and anticipating other beings that aren’t like you – that’s intelligence. Ask any cat owner.