The little person at the control panel, the one who sees what the retina produces, the one who decides, the one who speaks up…
(This is the dualist solution to the free will problem: yes, I have a physical body, they say, but I also have a little person inside of me who gets to make free decisions separate from that…)
Anthropomorphism is a powerful tool. When we encounter something complex, we imagine that, like us, it has a little person at the controls, someone who, if we were at the control panel, would do what we do.
A tiger or a lion isn’t a person, but we try to predict their behavior by imagining that they have a little person (perhaps more feline, more wild and less ‘smart’ than us) at the controls. Our experience of life on Earth is a series of narratives about the little people inside of everyone we encounter.
Artificial intelligence is a problem, then, because we can see the code, and thus have proof that there’s no little person inside.
So when computers beat us at chess, we said, “That’s not artificial intelligence, that’s simply dumb code that can solve a problem.”
And we did the same thing when computers started to “compose” music or “draw” images. The quotes are important, because the computer couldn’t possibly have a little person inside.
And now, LLMs and tools like ChatGPT turn this all upside down. Because it’s essentially impossible, even for AI researchers, to work with these tools without imagining the little person inside.
The insight that might be helpful is this: We don’t have a little person inside of us.
None of us do.
We’re simply code, all the way down, just like ChatGPT.
It’s not that we’re now discovering a new sort of magic. It’s that the old sort of magic was always an illusion.