For over 60 years, starting with the original Turing Test, people have kept setting up tests meant to show that computers were truly intelligent, and then, when some computer passes the test, nobody cares. After going over a bunch of these, Siskind writes:
Now we hardly dare suggest milestones like these anymore. Maybe if an AI can write a publishable scientific paper all on its own? But Sakana can write crappy not-quite-publishable papers. And surely in a few years it will get a little better, and one of its products will sneak over a real journal’s publication threshold, and nobody will be convinced of anything. If an AI can invent a new technology? Someone will train AI on past technologies, have it generate a million new ideas, have some kind of filter that selects them, and produce a slightly better jet engine, and everyone will say this is meaningless. If the same AI can do poetry and chess and math and music at the same time? I think this might have already happened, I can’t even keep track.
So what? Here are some possibilities:
First, maybe we’ve learned that it’s unexpectedly easy to mimic intelligence without having it. This seems closest to ELIZA, which was obviously a cheap trick.
Second, maybe we’ve learned that our ego is so fragile that we’ll always refuse to accord intelligence to mere machines.
Third, maybe we’ve learned that “intelligence” is a meaningless concept, always enacted on levels that don’t themselves seem intelligent. Once we pull away the veil and learn what’s going on, it always looks like search, statistics, or pattern matching. The only difference is between intelligences we understand deeply (which seem boring) and intelligences we don’t understand enough to grasp the tricks (which seem like magical Actual Intelligence).
I endorse all three of these. The micro level - a single advance considered in isolation - tends to feel more like a cheap trick. The macro level, where you look at many advances together and see all the impressive things they can do, tends to feel more like culpable moving of goalposts. And when I think about the whole arc as soberly as I can, I suspect it’s the last one, where we’ve deconstructed “intelligence” into unintelligent parts.
I am most interested in the last one. As a materialist, I do not think there is anything magical about intelligence. It must arise from physical/electrical/chemical stuff going on in our brains. It must, therefore, be simulatable with a big enough computer. And whenever we do understand something our brains are doing, it turns out that there are a lot of subroutines doing fairly simple things that add up to something bigger.
The higher mental activity I have thought the most about is, of course, writing. I have a strong sense that the words I type out when I am trying to write fast emerge from multiple subsystems, one of which does exactly what LLMs do: predict the next word from the ones that came before. I am one of those writers whose prose appears in the brain as a rhythm of sounds before the words form, after which some other module chooses words that fit the rhythm while still conveying the meaning; what brings me to a halt is when the modules clash. At that point, some more conscious module has to intervene and sort things out. When it goes right it feels amazing, words just pouring out of me, but I never have any sense that they are emerging from some deep and true soul. I have a module that remembers how millions of sentences from thousands of books go, and it takes elements from that training data to fit the story I am trying to tell. To the extent that this works well, it is pretty close to automatic.
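To make that "next word" module concrete, here is a toy sketch, entirely my own and a drastic simplification of what real LLMs do: a bigram model that picks each word based only on the word before it, sampled from whatever text it was "trained" on.

```python
import random
from collections import defaultdict

# Toy next-word predictor: a bigram model that looks only one word back.
# Real LLMs condition on long contexts with billions of learned weights;
# this just counts which word has followed which in the training text.
def train(text):
    follows = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev].append(nxt)
    return follows

def generate(follows, start, length=10):
    word, out = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:                # dead end: never saw a successor
            break
        word = random.choice(options)  # sample an observed successor
        out.append(word)
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train(corpus)
print(generate(model, "the"))  # e.g. "the dog sat on the mat and the rug"
```

The point is not that brains work this way, only that fluent-looking text can fall out of a mechanism this dumb once it has seen enough examples.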
Apparently when writers take questions from the public, the most common one is, "Where do you get your ideas?" I find this utterly unmysterious. Like LLMs, writers have a huge set of training data: other stories, their own lives, things they have read about in the news. If you went through the average long novel with enough knowledge of the writer's life and a big enough computer, you could probably trace the source of every element. The secret to "creativity" is 1) know a diverse set of things, and 2) combine them in interesting ways. I find that this is particularly true when writers are trying to be intensely personal, as in their memoirs; there is nothing in the average memoir that has not been in a hundred memoirs already.
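A similarly cartoonish sketch of that two-step recipe, again my own illustration rather than a claim about any real writer: store elements from varied sources, then recombine them, keeping the juxtapositions that cross sources.

```python
import itertools
import random

# Toy model of "creativity": (1) know a diverse set of things,
# (2) combine them in interesting ways. The sources, elements, and
# the novelty filter are all placeholders for illustration.
sources = {
    "other stories": ["a shipwreck", "a reluctant detective"],
    "own life": ["a failed exam", "a move to a new city"],
    "the news": ["a bank collapse", "a comet sighting"],
}

# Tag each element with the source it came from.
tagged = [(src, e) for src, items in sources.items() for e in items]

def interesting(pair):
    # Stand-in filter: call a pairing "interesting" when its two
    # elements come from different sources, an unfamiliar juxtaposition.
    (src_a, _), (src_b, _) = pair
    return src_a != src_b

ideas = [p for p in itertools.combinations(tagged, 2) if interesting(p)]
(src_a, a), (src_b, b) = random.choice(ideas)
print(f"premise: {a} (from {src_a}) meets {b} (from {src_b})")
```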
LLMs can mimic so much of human behavior because there is nothing magical about what humans do.
G. Veloren's observation about children is on point. My 3-year-old grandson, ever since he began to talk, regularly repeats what someone has just said to him, as though he's committing it to his word bank. His communication is clear, his vocabulary varied, and his sentence structure already shows skill in constructing compound and complex sentences. (This last is partially explained by the conversations of his nearest adults, all of whom have facility with language: I'm a retired English teacher, and my daughter has always been an avid reader and, in her own work as a designer of training materials, must be conscious of communication skills.)