What is it?
It’s an artificial reconstruction, using a network of “neurons”, of how our brains learn things.
Let’s say I want some imaging software to be able to recognise “a dog”. I set up my empty network, and then “train” it, using thousands of images, each one (correctly) tagged “dog” or “not-dog”.
“Surface” neurons pick up the pixels; the next “layers” down look for colours, lines and edges; some layer deeper still assembles a detector that can identify an eye… and so it goes.
Finally, when it’s let loose on random images, it will be able to tell me whether or not there’s a dog in there somewhere.
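The training loop described above can be sketched in miniature. This is emphatically not Google's software, just a hand-rolled toy: a single "neuron" fed four made-up pixel values, with an invented rule that "dog" images are brighter. The empty network starts as all-zero weights, and each tagged example nudges them.

```python
import math
import random

random.seed(0)

# Toy "images": four pixel brightnesses. Invented rule for illustration:
# dog images are bright-ish, not-dog images are dark-ish.
def make_example(is_dog):
    base = 0.8 if is_dog else 0.2
    pixels = [min(1.0, max(0.0, base + random.uniform(-0.15, 0.15)))
              for _ in range(4)]
    return pixels, is_dog

data = [make_example(i % 2 == 0) for i in range(2000)]

weights = [0.0] * 4   # the "empty network"
bias = 0.0
lr = 0.5              # learning rate

def predict(pixels):
    # Weighted sum squashed to a 0..1 "probability of dog"
    z = sum(w * p for w, p in zip(weights, pixels)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# "Training": after each correctly-tagged example, nudge every weight
# a little in the direction that reduces the error.
for pixels, is_dog in data:
    error = predict(pixels) - (1.0 if is_dog else 0.0)
    for i in range(4):
        weights[i] -= lr * error * pixels[i]
    bias -= lr * error

# Let it loose on fresh inputs it has never seen:
print(predict([0.9, 0.8, 0.85, 0.9]))   # bright, "dog-like"
print(predict([0.1, 0.2, 0.15, 0.1]))   # dark, "not-dog-like"
```

A real network stacks millions of these neurons in layers, but the principle is the same: nobody writes a rule for "dog"; the rule precipitates out of the weights.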
But here’s the kicker:
There’s no way, as yet, for anyone to find out how it reaches its decision. There’s no decision logic to read through, only millions of numeric weights; you can’t usefully probe it, and it can’t talk.
Even the people who build neural networks can’t say exactly how they work. There is, almost literally, a ghost in the machine.
Researchers at Google set out to try to spot the ghost by taking their image-processing software and running it backwards: telling it to draw a dog, in an effort to reveal the network’s idea of “dog-ness”.
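The trick behind "running it backwards" can be sketched with the same kind of toy model: instead of adjusting the weights to fit an image, you hold a trained network's weights fixed and adjust the image itself, nudging each pixel in whatever direction pushes the "dog" score up. The weights below are invented stand-ins for a trained detector, purely for illustration.

```python
import math

# Hypothetical weights of an already-trained toy "dog detector"
# (invented for illustration; a real network has millions of these).
weights = [1.2, -0.4, 0.9, 0.7]
bias = -1.0

def dog_score(pixels):
    z = sum(w * p for w, p in zip(weights, pixels)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# "Run it backwards": adjust the image to excite the network.
# For this linear toy, the gradient of the score with respect to
# pixel i points along weights[i].
pixels = [0.5, 0.5, 0.5, 0.5]       # start from flat grey
step = 0.1
before = dog_score(pixels)
for _ in range(20):
    for i in range(len(pixels)):
        pixels[i] += step * weights[i]
        pixels[i] = min(1.0, max(0.0, pixels[i]))  # keep valid brightness
after = dog_score(pixels)
print(before, "->", after)   # score climbs: the image drifts toward "dog-ness"
```

Do this through a deep stack of layers rather than one toy neuron and you get the hallucinatory pictures the Google article shows.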
This article includes examples of the images that result – including the one at the top of the page here.
Everybody says they make no sense.
But to some of us, they’re familiar.
Once upon a time I had a go with LSD. For a few hours, under its influence, I experienced “ideas” resembling these images superimposing themselves upon whatever view my eyes were picking up at the time.
Now I know where those images come from. I was, without knowing it, witness to the inner workings of my brain going about their business of making sense of what my eyes were looking at.
Those inner workings construct objects, faces, places, even feelings.
There’s more to what we see than meets the eye…