Sunday, August 17, 2025

A Discussion about Intelligence

I attended a discussion last night focused on the question, "What is Intelligence?" Nine people attended, three of whom work in AI. 

We started from a very simple definition: that intelligence is the ability to take in information and use it to generate some result, that is, information processing. By this definition, of course, all sorts of things are intelligent, from hand-held calculators to trees. This was kicked around, but most of us were willing to assign some degree of "intelligence" to very simple organisms and devices. For example, no one disputed that mice are intelligent.

One participant was focused on the notion that intelligence is inference, the ability to look at data and draw from it a conclusion that is not obviously present in the source. I get that this is a good way to think about what AI can and cannot do, but am not sure how it can really be distinguished from information processing at a fundamental level.

Incidentally it seems that when professionals think about the usefulness of LLMs, they regularly employ the "intern test." If you say, "LLMs are not smart, you can't even trust them to do X," somebody will reply "I would never trust an intern to do that."

There was some discussion of speed as a factor. Some people want to say that computers aren't smart, they are just fast, but as was pointed out, we often use speed as a way of judging how intelligent things are. E.g., it took this dog an hour to learn this new trick, but it took that dog a month, so this one is smarter.

One of my favorite questions got discussed: can you say that something is intelligent from the outside, based solely on its output, or do you want to posit some internal state of mind? E.g., some people say that while an LLM can produce what looks like intelligent output, it is not truly intelligent, because it has no understanding. It can search for words, but it does not think. In a related point, someone mentioned the ideas of a philosopher who, in thinking about intelligence, assigns much importance to the sense of self; would you call something intelligent that has no idea that it even exists? 

You can see the importance of that last question with regard to vast, vague systems. When we were talking about the ability of fungi to solve mazes, I said, in that case would you want to say that the intelligence resides, not in the fungus, but in the evolutionary system that created it? I mean, we couldn't even make a single dog, but evolution has made a thousand different kinds of dogs, besides all the other stuff. But there was a lot of reluctance to assign "intelligence" to evolution.

This relates to the question of goals; people are often unwilling to assign intelligence to AI, because it cannot set its own goals. But if the goals of, say, a mouse are set by evolution, how is that different from humans assigning goals to AI?

The AI people were focused on two points that I found interesting. First, there is the "Lookup Table" problem. If your system is just using its ultra-fast processor to look up answers in a huge database, most people would not consider that intelligent. It was generally agreed that IBM's old Deep Blue chess program was not intelligent, because it was basically just looking up situations and moves in its database. This is akin, of course, to the old Chinese Room problem, and I found it a good sign for the status of our civilization that nobody felt any need to debate the Chinese Room.
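
For the programmers in the audience, here is a minimal sketch of what the lookup-table objection amounts to (the names opening_book and lookup_player are made up purely for illustration):

    # A toy "lookup table player": no reasoning, just retrieval of stored answers.
    opening_book = {
        "start": "e2e4",          # precomputed responses, however vast the table
        "e2e4 e7e5": "g1f3",
        "e2e4 c7c5": "g1f3",
    }

    def lookup_player(position):
        # No inference, no generalization: an unseen position simply fails.
        return opening_book.get(position, "resign")

    print(lookup_player("e2e4 e7e5"))   # "g1f3"
    print(lookup_player("d2d4 d7d5"))   # "resign" -- it has never seen this

However fast the processor and however huge the table, nothing is ever inferred; a situation that was not stored in advance just falls through.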

The second point was about the complexity of the algorithm. The history of AI is full of systems that seemed intelligent, in limited circumstances, but turned out to be employing very simple algorithmic tricks. LLMs, by contrast, are highly complex, so much so that we often have no idea how they do what they do. Human brains are astonishingly complex. Should that be part of our definition of intelligence? One way to think about this is "compressibility": what is the shortest statement, in language or computer code or whatever, that could describe the operations of a brain or device? Do we want to say that any truly intelligent system should be too complex to be fully described in a simple way?
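
If you want to make the compressibility idea concrete, one crude stand-in for "shortest description" is the output size of an ordinary compressor. This is only an illustrative sketch, not a serious way to measure a brain or a model:

    # Compressed size as a rough proxy for "shortest description" of a behavior.
    import os
    import zlib

    def description_length(data: bytes) -> int:
        return len(zlib.compress(data))

    simple_behavior = b"if light then approach; " * 200   # one rule, endlessly repeated
    opaque_behavior = os.urandom(5000)                     # no short description exists

    print(description_length(simple_behavior))   # small: the single rule compresses well
    print(description_length(opaque_behavior))   # near 5000: almost nothing compresses away

On this view, a system whose whole operation boils down to a short rule sits at one end of the scale, and a system that resists any compact description sits at the other.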

But in that case, someone said, are you putting the emphasis on mystery, saying that only systems we don't understand should be considered intelligent? Someone else said, yes, absolutely, if superintelligent aliens showed up who found it very easy to describe how our brains work, they would not consider us intelligent.

On the whole it was a fine way to spend two hours on a weekend evening, maybe not as much fun as a really great movie, but much better than a mediocre one.

2 comments:

G. Verloren said...

Incidentally it seems that when professionals think about the usefulness of LLMs, they regularly employ the "intern test." If you say, "LLMs are not smart, you can't even trust them to do X," somebody will reply "I would never trust an intern to do that."

This is such an absurd comparison to make.

The point of an intern isn't to have them do the advanced work you need done by an expert. The point of an intern is to invest in them, and turn them into an expert in the process, whose specialized labor and knowledge you can then benefit from employing.

(Unless you're the sort of cynical corporate-culture nut who subscribes to the modern competing viewpoint that the point of an intern is to have someone who you don't have to actually pay money to, and whom you can exploit as disposable free labor, and sucker into doing the jobs no one else wants to do by making them bullshit promises about how it will "look good on their resume". But I digress...)

Internships exist because otherwise we suffer labor pool shortages in specialized roles that people would have no practical reason to acquire the relevant skills for. See medical training - the original source of the entire concept.

In complete contrast, there is no way to turn an LLM into an expert, ever. The very best you could ever hope to get out of an LLM is only on par with the very worst you could get out of a totally untrained human novice.

If the most you can possibly hope for out of a technology is to replicate the output of an utter schlub, you should be seriously reconsidering the entire existence of said technology. (Unless your goal is to have a machine do the worthless tasks you can't be bothered to do yourself, and which you also are unwilling to actually pay someone else to do. If an LLM could somehow magically refill your stapler and get you a coffee, etc, that'd honestly be preferable to actual humans being conned into doing drudge work for nothing.)

Ted Chu said...

Did anyone mention during your discussion that there are degrees of intelligence? It's not black and white.