I have been unable to find the article since; I think it was in Scientific American, perhaps in the 1980s.
In any event, it described training a neural network, perhaps for number recognition. The author said that when they "destroyed" the network, it began to have "flashbacks" that resembled its early training sessions.
That always stuck with me.