I agree with the idea that consciousness won’t emerge gradually in the same way it did in animals. But I don’t think the idea presented here of consciousness as an “add-on” is necessarily right. I’ll suggest two alternatives: (1) Consciousness is an artifact of the imperfections in our mental architecture. I don’t know that Dennett puts it quite that way, but this seems like a reasonable interpretation of Dennett’s “box of tricks” description of how consciousness comes about. (2) Consciousness emerges from desires rather than cognition: awareness of self is caused by a constant monitoring of how distant the self is from some objective, in order to better achieve that objective.
The implications of these two alternate views might be: machines never become conscious because they lack our computational imperfections; machines never become conscious because they are never created with goals, only programming; machines become situationally conscious when their programming gives them an objective; machine consciousness exists but is fundamentally different from ours, because the imperfections in their computational architecture are very different; etc.
Right, these seem like live alternatives. I think we have a long way to go in understanding consciousness!
There are different schools of thought, as you well know. What is consciousness? Is it only manifested in the much ado about nothing of human existence? You can make a pretty good case that the frog, never needing to think about its existence, is more in tune with its consciousness than we will ever be.
Machine sentience will come with thunder, with a whimper, with everything in between, and in ways we cannot conceive of given our limited perception of consciousness.
What a great subject for endless thought exercises.