It wasn't a single neuron but a cluster of neurons, according to the article, and I believe this kind of thing is generally best done and talked about at the level of the latent space, not individual neurons. It's already been shown that LLMs encode concepts along directions in the (extremely high-dimensional) latent space. "King - Man + Woman = Queen" is old school; there have been demos showing you can average activations over a bunch of texts to identify vectors for concepts like, say, "funny" or "academic style", and then have the LLM rewrite some text while you apply the equivalent of "- a*&lt;Academic style&gt; + b*&lt;Funny&gt;" during inference, turning a piece of scientific writing into more of a joke.
I'm surprised we don't hear more about this (the last mention I remember was about suppressing "undesirable" vectors in the name of "alignment"). I'd love to get my hands on a tool that makes it easy to do this on some of the SOTA OSS models.
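Roughly what I have in mind, as a minimal sketch: compute a mean-difference "steering vector" from two sets of example texts and add it to one layer's residual stream via a forward hook. The model name ("gpt2" as a stand-in), the layer index, and the scaling factor are all assumptions you'd tune per model; this is the general recipe, not any particular paper's implementation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; any decoder-only HF model works the same way
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

LAYER = 6  # which layer's hidden state to read/steer (a guess; tune per model)

def mean_activation(texts):
    """Average the chosen layer's hidden state over a set of example texts."""
    acc, n = None, 0
    for t in texts:
        ids = tok(t, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids, output_hidden_states=True)
        h = out.hidden_states[LAYER].mean(dim=1).squeeze(0)  # average over tokens
        acc = h if acc is None else acc + h
        n += 1
    return acc / n

# In practice you'd curate these example sets by hand
funny_vec = mean_activation(["Why did the chicken cross the road? ...",
                             "Knock knock. Who's there? ..."])
academic_vec = mean_activation(["We propose a novel framework for ...",
                                "Prior work has shown that ..."])
steer = funny_vec - academic_vec  # the "- academic + funny" direction

def steering_hook(module, inputs, output):
    # Decoder blocks return a tuple; element 0 is the hidden state.
    hidden = output[0] + 4.0 * steer.to(output[0].dtype)  # scale is a knob
    return (hidden,) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(steering_hook)
prompt = "In this paper we demonstrate that"
ids = tok(prompt, return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=40)[0]))
handle.remove()
```

That's the whole trick; a usable tool would mostly be about making the vector extraction, layer choice, and scaling ergonomic for the big OSS models.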