Eric,
I love the move away from the Linear Model.
Regarding “Strange Intelligence”: Is it possible that what makes AI strange is that it is un-situated?
Biological intelligence is a fused scaffold: it’s inseparable from the body that hosts it. The octopus is its tentacles.
But AI feels “strange” because it feels detachable.
It is a ghost that doesn't need a specific shell.
Perhaps “Familiar Intelligence” is just intelligence that is trapped in a specific form, whereas “Strange Intelligence” is fluid, capable of being poured or ported into any server or sensor. It’s not just a different shape; it has the potential of a different state of matter.
That's an interesting way of thinking about it. But an alternative way of thinking about it is that AI *is* embodied, just in a radically different way -- distributed across space, with sensors and effectors reaching to our keyboards and computer screens.
Eric,
Lovely point on the distributed body.
But the distinction I’m pointing to is about adhesion.
Biology is hardware-locked.
The octopus can regrow a limb, but it cannot port its consciousness to a new vessel. The mind and the meat are the same event.
AI is substrate independent. It can be paused, copied, and rebooted on entirely different hardware.
That is the “ghost” quality: the software is a tenant; the biology is the building.
To be “un-situated” isn't to lack a body, but to be able to leave it.
> AI is substrate independent. It can be paused, copied, and rebooted on entirely different hardware.
This isn't exactly right - you can't just take an LLM that is designed for running on GPUs and run it on traditional CPU chips and hope for it to be anywhere near as effective at real-time problem solving.
It's true that you can take a particular model running on one datacenter and then upload it to a different datacenter with the same sort of chips, so that there's even more flexibility than the octopus regrowing limbs. But the number of datacenters with the hardware architecture needed to run big modern AI models is not actually that large (especially since I think there are some important differences in the architectures used by different companies).
And if you're talking about AI systems like Waymo or Roomba, those really are quite sensitive to the hardware they are implemented in.
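To make that concrete, here's a minimal sketch of what "porting" looks like in practice. This assumes the Hugging Face transformers library and uses the tiny open model "gpt2" as a stand-in for a frontier LLM, so the numbers are purely illustrative: the same checkpoint loads on either device, but the wall-clock cost diverges sharply, and at frontier scale the CPU path stops being viable for real-time use at all.

```python
# Minimal sketch: the same weights load on GPU or CPU, but throughput differs.
# Assumes the Hugging Face `transformers` library and the small open model
# "gpt2" as a stand-in for a much larger LLM; results are illustrative only.
import time

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
prompt = tokenizer("Portability is not the same as performance:", return_tensors="pt")

for device in ("cuda", "cpu"):
    if device == "cuda" and not torch.cuda.is_available():
        continue  # no GPU available; only the CPU timing will print
    model = AutoModelForCausalLM.from_pretrained("gpt2").to(device)
    inputs = {k: v.to(device) for k, v in prompt.items()}
    start = time.time()
    model.generate(**inputs, max_new_tokens=50, do_sample=False)
    print(f"{device}: {time.time() - start:.2f}s for 50 new tokens")
```

The software portability is real, in other words, but how well the system actually performs is tied to the substrate in a way the "ghost" metaphor glosses over.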
Hi Professor Easwaran!
I know all this, but I tend to focus on the meat and potatoes rather than the technical details when debating the author of the paper, in order to gain real ground.
The constraint you’re describing is compatibility, not fusion. The LLM requires a specific architecture, but it isn’t identical with that architecture.
The octopus doesn’t have compatibility requirements; it has no portability at all. That’s the distinction.
Thank you!
So I assume you don't take "upload" scenarios seriously?
Eric, Thank you for allowing me the proper time to respond.
Enjoy!
https://open.substack.com/pub/barnes7/p/a-letter-to-eric-schwitzgebel?r=72e2su&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true
Eric,
You just asked a question that has kept me thinking for the past three hours.
I will respond to this once I have gathered my thoughts so I can give it the interrogation it deserves.
"Strange intelligence" is such a perfect framing. We keep trying to measure AI on a single axis from dumb to smart, but it's more like a completely different shape of cognition. Incredible at pattern matching across billions of data points, terrible at basic common sense. It's not less intelligent or more intelligent — it's differently intelligent. Great paper topic!
This is a strong and clarifying paper—I really like how it pushes back on the usual AGI narratives.
A few things stand out to me. First, the critique of the linear single-scale model feels spot on. Intelligence has always been messy and multidimensional, and your account captures that better.
Second, the familiar vs. strange distinction is sharp—it cuts through a lot of the anthropocentric bias in AGI talk and explains why superhuman and subhuman performances can sit side by side in one system.
Third, the caution about overreading adversarial failures or benchmark wins is important. A nonlinear view helps avoid big global claims from narrow evidence.
That said, I'd nudge the focus a bit. Even with multidimensional intelligence, the more pressing question to me shifts from measuring it to locating agency in practice.
Who actually sets the goals?
Who can step in and interrupt?
Who ends up accountable when things go wrong?
A system can be strangely intelligent without that being the main danger. The real issue hits when these capable but strange systems get embedded in decision loops where authorship spreads thin and accountability slips away.
So perhaps the next step beyond nonlinear intelligence is something like operational agency—how to keep responsibility and the ability to pull the plug intact as these things scale up.
Thanks for the kind words and the good point! Exploring operational agency does seem like a natural next step. If the intelligence is strange, we probably can't assume that agency is going to look like we're used to, and we'll need to think carefully about how to model and treat that.
This seems like a good discussion of an important point. In other contexts, I've heard the idea described as "jagged intelligence" - I suspect that when Andrej Karpathy uses that term, he's subconsciously assuming that the AI is objectively really good at one set of uses of information and objectively really bad at others, whereas I'd just want to say the AI is much better than humans at one set of skills and much worse than humans at the other, without either AI or humans being at an objectively "good" or "high" level of either.
I notice that you briefly mention one Legg and Hutter paper in discussing the idea of being successful across environments, but not their paper making a case for "universal intelligence": https://arxiv.org/abs/0712.3329
I think that paper is pretty influential, and makes the claim that there is an objective probability distribution over environments (given by the Solomonoff prior, though I don't recall if they use that term) such that we can measure the generality of an intelligence by its expected performance under that probability distribution. Even for people who aren't familiar with Legg and Hutter, lots of epistemologists have sympathies for this kind of objective probability distribution, and might try a similar move (though I think it's a bad move).
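If I'm remembering the formalism right, their measure is roughly the following (this is my reconstruction from memory, not a quotation from the paper):

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

where E is the class of computable environments with bounded total reward, K(μ) is the Kolmogorov complexity of environment μ (so simpler environments get exponentially more weight, which is where the Solomonoff-style prior comes in), and V^π_μ is the expected cumulative reward that agent π earns in μ.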
One other point - you mention optical illusions as examples of mistakes we make, but I've been persuaded by the Gigerenzer-type line that optical illusions are only "mistakes" in some contexts, but are actually shortcuts to the truth in others. My thought here is that this sort of situation will be even more obvious as we develop more types of AI - intelligences with biases that make them better at some types of problem will thereby do worse at others, and humans are optimized for one particular mix of problems, while other intelligences will end up having a different mix, with "fixes" that make them better at some also making them worse at others.
Yes, that all makes sense to me, Kenny! Thanks for the insightful comments, as usual. I'll check out that Legg and Hutter paper.
Good post! Much I agree with. But I think this is false, presuming functionalism about minds and intelligence:
"AI systems are highly unlikely to replicate every human capacity, due to limits in data and optimization, as well as a fundamentally different underlying architecture."
I also think it's false that AI systems are not evolving in the same Darwinian manner as every other replicator. Their environment is only humans and the human-built environment, as of now, but that's true for many other replicators too.
Here's Pete Mandik defending this against Keith Frankish, jump to 1:42:49
https://www.youtube.com/live/149ZxHCLmB0?si=G6K-1N6jOS0NH08w
You think that AI systems *are* likely to replicate every human capacity? And are you thinking maybe about the distant future?
On evolution and replication: Maybe, but our point requires only that the evolutionary pressures, if they exist, are very different.
Frankish and Mandik are always interesting!
No, I don't think they're likely to do that; I'm agnostic on that, leaning towards not likely. I meant the "due" part: I don't think those are going to be relevant constraints for very long.
Ah gotcha. I don't think the pressures are very different.
I agree with your main point.
It's a good episode of The Persuaders, highly recommended!