
A continuation of Hilary Putnam's "Robots: Machines or Artificially Created Life?"

With the rise of self-governing machines in the 1960s, questions about the possibility of robot consciousness have perplexed philosophers and paralleled our own understanding of human consciousness. The thought of sentient robots has acted as an outlet for unadulterated introspection into the necessary conditions for consciousness as well as the recognition of consciousness in other things with minds (twms). In his paper "Robots: Machines or Artificially Created Life?", Putnam engages with arguments that attempt to situate consciousness within the psychological or physical organization of twms, and in turn attribute or dismiss consciousness in robots. Putnam concludes that no conditions establish or dismiss the possibility of consciousness in robots; however, our capacity to recognize consciousness in robots is predicated on the decision to accept these beings as members of our linguistic community (407). This paper will reconstruct Putnam's arguments in light of his conclusion and include my own contentions, engagements, and contributions to the topic at hand.

Putnam first considers whether robots can be considered members of a linguistic community, based on the role of psychological predicates in such a community, where the robots have no awareness of their "physical make-up or how they came into existence" (387). Given a robot's capacity for "inductive reasoning and theory construction", he can learn to recognize that what he believed was red was not truly red (387). This distinction between "physical reality and visual appearance" (388) speaks to Wittgenstein's contention with private language, where he holds that members of linguistic communities must develop a standard for correct usage of words, and it is only when we have grasped these linguistic criteria, which exist objectively by virtue of the agreed-upon conventions, that we may engage with the linguistic community. A robot's capacity to discriminate between "red" and "not red" reflects a capacity to subscribe to criteria and thus be a member of a linguistic community. This capacity exists as a function "of the rules that govern its construction and use", rules that are semantically nonequivalent to the rules that govern the physical correlate of the "sensation red", which Putnam names "flip-flop 72" (389). If a robot were to discriminate between "red" and "non-red" based on the presence or absence of "flip-flop 72", an observer would not be able to recognize the robot as a member of his linguistic community in the same sense as when the discrimination is a consequence of a learned criterion the robot holds and is aware of. This awareness, however, is contested by Bauer, who argues that a fundamental disanalogy arises given that a human's utterance "I have the sensation red" is a consequence of his self-awareness, as opposed to the robot's, which is a function of his mechanistic response to "flip-flop 72" (390). Putnam submits that robots can instantiate any psychological model humans possess, and thus the sensation of red "does not uncontrollably trigger the utterance 'it looks like there's something red'"; rather, a certain level of awareness of the sensation is necessary in order to report it (390). In order to be a member of a linguistic community, the robot must be aware of "the sensation red" and must hold it as a psychological attribute independent from physical attributes. Putnam argues that we can be aware of a sensation without being aware of the brain state that correlates with it, which reflects a semantic difference between psychological and physical attributes (392). Given that these psychological attributes are represented by a totality of "states and sensory inputs and behavior", and considering Fodor's argument that psychological models may be identical in two entities with differing brain structures, psychological isomorphism between humans and robots can be assumed (392). Not only does this mean we can derive the validity of a robot's membership in our linguistic community, or that robots can hold different physical structures and subscribe to the same linguistic community given psychological isomorphism, but also that we cannot dismiss a claim of consciousness in robots. On the other hand, we also cannot derive consciousness from a psychological model (394).

Putnam moves to engage with arguments that attempt to dismiss consciousness in robots on the basis that determinism undermines it. Firstly, Putnam responds to the argument of robots "replaying behavior" by clarifying that robots have the capacity to learn, and more than that, the capacity to learn things their programmer does not necessarily know, while maintaining psychological isomorphism with the programmer (396). This capacity to learn, in addition to holding a psychological model, means that even the programmer of the robot would be incapable of predicting its behavior.

Moreover, if a robot is reprogrammed to behave differently, this does not undermine the libertarian free will entailed by psychological isomorphism, provided its capacity to "learn, develop, change" is maintained (397). Contentions about robots' free will are contingent on the idea that humans have free will (397). If, as Putnam proposes, we found out that humans are truly deterministic and that a supreme creator anticipated every utterance we held, then under this assumption we too would not be conscious (397). I extend that, just as this revelation would not negate our own self-perceived consciousness, it should not negate the robot's self-perceived consciousness. I submit that this understanding of consciousness should reframe how we evaluate consciousness, given that the more relevant frame of reference is one of inner ostension rather than external observation. Consciousness as we currently understand it exists independently from externalities. Suppose a human, let us call him Hue, were to have his memory erased and be put in a simulation where he is led to live a life in a universe X whose rules of physics, societal conduct, and linguistic principles differ drastically from those of our own universe. After he dies in the simulation, he awakens in our own universe. It is imaginable, and some would argue probable, that Hue would believe that he was no less conscious in the simulated reality than he is in our own. Hue would also be capable of separating the illegitimacy of universe X from the legitimate self, which is the one remnant of the constructed reality. The property of consciousness is one of an indivisible and fundamentally independent self, independent not only from externalities but also from the beliefs and values which have guided his behavior in universe X. Consequently, a robot undergoing the same process can only be considered conscious if he holds an indivisible self that is self-legitimized, as opposed to legitimized by the laws that govern universe X, and that exists independently from the beliefs and values derived from his reality. The question of consciousness in the robot thus becomes a question of the necessary conditions for this "self".

Putnam moves to contend with the identity theory's claim that consciousness, being a "property of matter at a certain stage of organization", cannot develop in robots given the absence of "brain processes" (397). Putnam dismisses this claim on the basis that, given that the robot in question shares the same psychological model as a human and that "psychological terms have exactly the same sense" for both entities, the question of whether the brain processes are identical is left unanswered (398), and, I would add, the processes cannot be equated given the different physical organization of the two entities. While Putnam uses this argument to suggest the sanctity of the semantic difference between psychological and physical mental states in considering the consciousness of robots, I find issue with Putnam's argument. Even though the robot is psychologically isomorphic in terms of his intentional states, Putnam cannot equate their qualitative experiences. If we recognize psychological states as holding inherent qualitative dimensions, then under Leibniz's principle we cannot equate the robot's psychological states with the human's. Having the same stimulus does not entail the same receptivity between two entities. Under the same logic, however, I could not equate psychological states between humans either, which leads me to distinguish between having the same psychological model and sharing the same psychological states. Given that the qualitative dimensions of psychological states differ among species with different sensory inputs, it is foreseeable that the way a species develops linguistic frameworks is shaped by and adapted to the way the species hears, sees, or senses. If a species of primordial bats capable of speech evolved alongside, yet independently from, cavemen and somehow developed the same psychological model as humans, then, given that the way they experience sound differs drastically in the qualitative aspects of the sensation, their linguistic frameworks would develop differently. An implication, consequently, is a necessary condition of similarity of sensory inputs among members of a linguistic community. This predicates Putnam's decision on the acceptance of robots as members of our linguistic community on their having sensory inputs similar to humans'; otherwise, a community of robots would develop its own linguistic framework, leaving the question of the consciousness of robots beyond our reach.

Putnam then considers a theory which posits that ordinary-language psychological terms are implicitly defined by a psychological theory (398). Given that the robot, Oscar as Putnam names him, shares a psychological model with humans, this theory questions whether acceptance into a linguistic community should be a contentious issue at all, given that utterances reporting sensation are governed by the same implicit psychological theory. While an implicit theory underpinning such utterances means we can infer behavior, and consequently psychological organization, in robots, Putnam contests that this would imply that notions such as "anger" are not only fixed, which would be "highly dubious", but also theoretical (399). If the sentiment of "anger" is tied to the term "anger" as a theoretical notion, then sentiments such as "anger" may not really exist. Consequently, we may find ourselves in a situation where we hold a sensation such as anger which is negated by observers or dismissed as imaginary (399). However, as Putnam puts it, it is "obvious that 'psychological terms in ordinary language' have a reporting use" rather than being the object of observation themselves (399). Putnam recognizes that pain may be defined differently from one twm to another, and, I would extend, may be perceived subjectively even among members of the same linguistic community, yet "the same concept of pain may be applicable" (401). However, it remains natural for us to accept someone's claim that he is in pain (401). Whether or not we are referencing the exact same pain, we take it to mean the same thing on the condition that one's discourse is not "linguistically deviant". Our objective and subjective understandings of sentiments coexist and do not interfere with our capacity to acknowledge the utterances of sentiments others make, given that we judge these utterances by the criteria of the linguistic community rather than by our subjective understandings of our own sentiments. While we cannot draw conclusions about the internal states of members of a linguistic community simply from their subscription to the same criteria for these psychological terms, we can nevertheless treat and converse with others as members of such a community, including a robot community.

Ziff's argument that being "alive" is a necessary condition for being conscious highlights a fundamental intuition in the discussion of robot consciousness: to dismiss the claim of consciousness based on the physical makeup of a being (402). Ziff adds that what it means to be alive has to do with the physical structure of the entity, which consequently restricts whether or not Oscar can be conscious (402). Putnam illustrates the implications of this condition by arguing that if we constructed a sufficiently life-like android, with a psychological model programmed by a designer, it would be conscious, as opposed to a machine with the same psychological model and a completely deterministic physical-chemical system (403). Firstly, Putnam proposes that the more relevant marker for life in animals (or humans) is the "behavioral organization" defined by the psychological model of the animal (403). Ziff's line of argument becomes circular in that what we know as being "alive" is predicated on consciousness (403), and yet we predicate consciousness on being alive (402). While I would add that this loop suggests an inherent flaw in the reasoning, Ziff holds that it is inherent in the meaning of "alive" that something "whose parts are all mechanisms" cannot be alive (403). This, however, carries dualist implications, given that a physicalist theory (on which we, including our consciousness, are products of mechanisms) undermines what we understand to be "alive". Putnam must then contend with the linguistic implications of what we understand to be "alive", given that we can imagine a robot "thinking, being power-mad, etc." just as readily as we cannot imagine an entity composed purely of mechanisms being conscious. Thus, attempts to dismiss consciousness in robots based on linguistic arguments for necessary conditions on the physical makeup of beings fail.

Putnam's analysis concludes that there are no sufficiently convincing empirical markers for consciousness, whether in the psychological or physical makeup of a being (407). If we derive consciousness from the mind rather than from these dimensions, it is possible to recognize consciousness in robots just as much as it is possible not to (404). This problem parallels the other-minds problem in that the issue of attributing consciousness to other humans, irrespective of psychological isomorphism, remains a problematic notion resolved either through "probability... by 'the argument from analogy'" or through "dubious theories of meanings" (404). As Putnam proposes, if a child grows up in a suit of armor and is deceived into thinking he is a robot, and we, in turn, are deceived into thinking so as well, stating that this entity is not conscious would not hold any truth value (405). The truth value, however, is much more accurately reported by the child than by the limited observer. A robot philosopher can thus conclude that a second-order robot, a robot manufactured by other robots, can logically be considered to have sensations, just as much as it can be considered not to (406). Even with sufficient knowledge of the physical and psychological makeup of Oscar, the robot philosopher is still confronted with unanswerable questions, given that it is not possible to ascribe or dismiss consciousness in Oscar based on his physical or psychological description (406). Consequently, Putnam recognizes the role of "robot language" in conjecturing whether robots have sensations or not. The "decision" that Putnam posits as instrumental in determining consciousness in robots is whether robots should be treated as "fellow members of my linguistic community or as machines" (406). Our acceptance of these robots provides ground to circumvent the other-minds problem, given our acknowledgment of the sensation claims of fellow members of our linguistic community (406). Their treatment as machines, however, prevents us from counting the robot as "conscious" or "alive", including the child in the suit of armor (406).

It is important to recognize that the question of consciousness in humans is not a question at all. If tomorrow a convention of credible scientists and philosophers were to publish a damning report "proving" with certainty that humans are not conscious, I doubt that the general population would subscribe to that notion rather than believe their own self-perceived consciousness. Just as I have argued previously, the status of consciousness is inherently independent from externalities, and so the relevant frame of analysis for the necessary conditions for consciousness is introspective first and foremost. We know mechanisms not to be conscious because they are indivisible from their mechanisms and thus lack the capacity to choose and direct their thoughts as well as to live subjectively in an objective world. If we were to identify a characteristic of consciousness based on our linguistic inclination to call something conscious or not, it is the capacity to deliberately and intentionally choose our thoughts and actions. If a human is genetically predisposed to produce significantly lower serotonin and to perceive his surroundings as depressing, cognitive therapy is contingent on his capacity to distinguish between his sentiment, or physical predisposition, and the "self", as well as to derive legitimacy for his altered psychological mechanisms from that same "self". To be conscious is to be a "self" that is capable of directing one's thoughts. If a robot were physically predisposed to perceive his surroundings as "sad", he must first have the necessary qualitative affectivity to perceive "sad" as painful, and second, must be able to recognize himself as an independent "self" with the capacity to legitimize thoughts and changes in guiding principles. If we opt for an understanding of consciousness as developing by causal mechanism rather than simply appearing in humans by virtue of a creator, we begin to recognize that consciousness may develop in degrees, degrees that are intimately linked with the extent of control and independence we recognize in the self. A natural implication, however, is that consciousness begins to more closely resemble a natural inclination in humans to separate the perceiver from the object of perception, an inclination which is not certain to develop in robots but could possibly take hold in a robot programmed to hold such an inclination and unaware of the origins of its creation, thus preserving the robot's capacity to accord himself true independence and legitimacy of "self".


Citations:

Putnam, Hilary. "Robots: Machines or Artificially Created Life?" The Journal of Philosophy, vol. 61, no. 21, Dec. 1964, p. 668. doi:10.2307/2023045.
