8 Comments
Suhrab Khan:

Your breakdown really highlights how tricky defining AI can be. I appreciate the examples spanning classic programs to future possibilities.

The Consciousness Lab:

I’ve been thinking recently about our lack of shared definitions being so limiting in…essentially all fields, since language is our best bet for knowledge-sharing/growth. Thanks for your perspective.

Mike Smith:

I tend to think of "artificial" as engineered, as opposed to evolved. And "intelligence" as prediction in pursuit of goals. Notably this is prediction by the system itself, not by its instincts or programming. In that sense, AI would be any engineered system capable of effective unique predictions.

But I have no idea how well that holds up to the examples of what's been called "AI" over the decades. I suspect it's too hazy a term to be particularly rigorous about it.

Eric Schwitzgebel:

Yes, it's hazy! What I like about the approach you describe is that it doesn't rely on assumptions that AI must be a matter of digital computation as that concept is sometimes (narrowly) understood.

Mike Smith:

Thanks! I agree on the digital part. If we did find discrete processing made a crucial difference (which I personally don't expect), I'm sure we'd eventually find a way to design intelligent continuous or hybrid systems.

Kenny Easwaran:

I'm surprised you didn't talk about Turing's discussion of this in "Computing Machinery and Intelligence"! The famous part of the paper is his discussion of "intelligence", but the section about "artificial" is good too. In order to rule out the usual way of having a baby as counting as "artificial", he toys with the idea of saying that the team that builds it has to all be of one sex, but notes that it's not out of the question that we could some day clone people from a skin cell, and thus just stipulates that he's talking about digital computing machinery, because that's what he's familiar with.

In the AI literacy class I've been teaching, I lean heavily on a definition of "intelligence" that I found in lecture 1 from Crash Course AI (https://thecrashcourse.com/topic/ai/), which is the ability to gather information and use it to make progress towards some sort of goal. Basically all living things count as "intelligent" on this picture, as do very simple systems like thermostats and clocks, and I then just lean on the idea that this can easily be made scalar by seeing how many different types of information the system can gather, and how effectively it can use this information, and for how many different types of goal.

I don't worry so much about "artificial", and just think about the diversity of types of intelligence that there can be. If the clockwork automata, or Frankenstein's monsters, or digital computers, or quantum neural nets, have the same sort of intelligence as humans, with the same types of information they can gather, and the same sorts of effectiveness at applying them to the same sorts of goals, I think that ultimately they won't be that disruptive - we'd have to get used to people that look radically different from us, but they would still be able to function in our society.

But if they can be truly general intelligences that can work with even more types of information than humans towards even more types of goals, that will be radically disruptive, regardless of whether they are biological or mechanical. And even if they are just "jagged intelligences" (as current AI systems seem to be getting towards), where they are radically superhuman at some types of information processing for some types of goals while being radically subhuman at others, that will also be disruptive.

Eric Schwitzgebel:

That's a very liberal definition of intelligence! But it's possible that that's the heart of the matter, and what we usually think of as intelligence is just a difference of degree on that basic foundation. Of course, if we then think of AI as anything that is artificial and intelligent, the thermostat now counts as AI.

Your final thought is basically where I want to go. We should think more broadly about what the architectures of intelligence might be, not assuming that they need to look like the ones that are familiar or proceed linearly from subhuman to human to superhuman. I like the phrase "jagged intelligence". In an earlier post, I discussed the concept of "strange intelligence" as used by my current PhD student Kendra Chilson in her dissertation on this topic. The forms, and the patterns of success and breakdown, might be increasingly unfamiliar.

Kenny Easwaran:

My general strategy on lots of concepts is to go very liberal and scalar!

I don't recall where I first heard the phrase "jagged intelligence". It might be one of Andrej Karpathy's coinages (https://x.com/karpathy/status/1816531576228053133?lang=en), like "vibecoding".
