Machine functionalism is one theory of how conscious mental states arise and is thus relevant to the question of what sorts of entities can suffer. This page records some notes on the topic, focusing on objections to the functionalist position and possible responses.

Machine Functionalism and Computer Simulations

Hilary Putnam, the father of machine functionalism, propounded the view that "being in pain is a functional state of the organism," where "all organisms capable of being in pain are Probabilistic Automata" [1]. Putnam himself "require[d] that the Machine possess 'pain sensors,' i.e., sensory organs which normally signal damage to the Machine's body", but many modern thinkers, such as Ray Kurzweil, tend to imagine that a mind can arise from mere software implementing the right sorts of inputs and outputs. Compared with building an entire corporeal robot to instantiate artificial consciousness, the software-only proposal would be far cheaper, allowing vastly larger numbers of minds to be run. (This view is perhaps closer to computationalism than to ordinary machine functionalism.)
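To make the formalism concrete, here is a minimal sketch of the kind of Probabilistic Automaton Putnam describes: a set of states, sensory inputs (the output of the "pain sensors"), and a probabilistic transition table. The particular states and probabilities are illustrative assumptions, not taken from Putnam's paper.

```python
import random

# Toy Probabilistic Automaton in the spirit of Putnam's formalism.
# The states ("neutral", "pain") and the transition probabilities are
# illustrative assumptions, not taken from Putnam (1967).

STATES = ["neutral", "pain"]
INPUTS = ["no_damage", "damage"]  # what the "pain sensors" report

# (state, input) -> list of (next_state, probability)
TRANSITIONS = {
    ("neutral", "no_damage"): [("neutral", 1.0)],
    ("neutral", "damage"):    [("pain", 0.9), ("neutral", 0.1)],
    ("pain",    "no_damage"): [("neutral", 0.7), ("pain", 0.3)],
    ("pain",    "damage"):    [("pain", 1.0)],
}

def step(state, sensor_input):
    """Sample the next state from the probabilistic transition table."""
    next_states, probs = zip(*TRANSITIONS[(state, sensor_input)])
    return random.choices(next_states, weights=probs, k=1)[0]

state = "neutral"
for sensor_input in ["no_damage", "damage", "damage", "no_damage"]:
    state = step(state, sensor_input)
    print(sensor_input, "->", state)
```

On a functionalist reading, "being in pain" just is being in the "pain" state of an automaton like this (suitably enriched), whatever physical substrate happens to implement the table.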

Would software-only minds require interaction with a virtual environment? Some psychologists, such as Patrick Wall [2], maintain that conscious experience arises only out of an interaction of the mind with a physical body. If that's true, then at least a virtual-reality (VR) world would seem necessary. But compared with the complexity of simulating a human-level mind, a VR world would probably be cheap, because most of its aspects would only need to be computed as rough approximations rather than in full detail.
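As a back-of-the-envelope illustration of why the environment could be cheap relative to the mind, consider an attention-based level-of-detail scheme in which only the regions the mind is currently attending to are simulated finely. The region counts and per-region costs below are made-up assumptions, not estimates from any source.

```python
# Toy cost model for an attention-based level-of-detail VR environment.
# Cost units and region counts are made-up assumptions for illustration.

FINE_COST = 1000   # cost units per finely simulated region per timestep
COARSE_COST = 1    # cost units per coarsely approximated region per timestep

def environment_cost(num_regions, num_attended):
    """Cost of one timestep when only attended regions are simulated finely."""
    return num_attended * FINE_COST + (num_regions - num_attended) * COARSE_COST

# 10,000 regions, but the simulated mind attends to only 3 of them:
print(environment_cost(10_000, 3))   # 12,997 cost units
print(10_000 * FINE_COST)            # 10,000,000 if everything were simulated finely
```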

Implications of Machine Functionalism

The importance of machine functionalism mirrors that of the hard problem of consciousness in general. If broad machine functionalism is false, that potentially narrows the set of entities we believe can have conscious experiences. For instance, if a mind can't run entirely in software but requires some (biological or at least physically tangible) body to receive the inputs and produce the outputs in the right way, then the number of virtual-reality minds that could be run would presumably be much smaller, removing some force from the simulation argument. It would also imply that utilitronium may be more costly to create than we might hope.

Objection: The Realization Problem

In section 3.2 of [3] (p. 237), Oron Shagrir reviews an objection he calls the "Realization Problem". In particular, Shagrir cites Putnam's proof in [4] "that every ordinary open system is a realization of every abstract finite automaton." Shagrir continues:

Differently put, if a functional organization of a certain complexity is sufficient for having a mind, as the functionalist claims, then the rock, too, should be deemed to have a mind. In fact, almost everything, given that it realizes this automaton, has a mind. Moreover, if Putnam's theorem is true, then my brain simultaneously implements infinitely many different functional organizations, each constituting a different mind. It thus seems that I should simultaneously be endowed with an infinite number of minds!
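The worry can be caricatured in a few lines of code: for any run of an abstract automaton, we can gerrymander a mapping from an arbitrary system's successive physical states onto exactly the automaton states that run demands. This is only a cartoon of Putnam's actual proof (which handles arbitrary inputs by disjoining physical states), but it conveys why "realization" threatens to come too cheaply.

```python
# Caricature of the Realization Problem: any system that passes through
# distinct physical states over time can be mapped onto an arbitrary
# automaton run, so the mapping "realizes" the automaton trivially.
# This is a simplification of Putnam's proof in Representation and Reality.

def automaton_run(initial_state, inputs, transition):
    """The state sequence an abstract finite automaton is supposed to traverse."""
    states = [initial_state]
    for symbol in inputs:
        states.append(transition[(states[-1], symbol)])
    return states

# A tiny two-state automaton.
transition = {("A", 0): "B", ("A", 1): "A", ("B", 0): "A", ("B", 1): "B"}
desired_run = automaton_run("A", [0, 1, 0], transition)   # ['A', 'B', 'B', 'A']

# The "rock": it merely occupies distinct physical states p0, p1, p2, p3 over time.
rock_states = [f"p{t}" for t in range(len(desired_run))]

# Gerrymandered realization: map each physical state to whichever automaton
# state the desired run demands at that moment.
realization = dict(zip(rock_states, desired_run))
print(realization)   # {'p0': 'A', 'p1': 'B', 'p2': 'B', 'p3': 'A'}

# Under this mapping, the rock's state sequence matches the automaton's run.
assert [realization[p] for p in rock_states] == desired_run
```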

References

  1. Hilary Putnam, "The Nature of Mental States," 1967.
  2. Patrick D. Wall, Pain: The Science of Suffering, 2000.
  3. Oron Shagrir, "The Rise and Fall of Computational Functionalism," in Yemima Ben-Menahem (ed.), Hilary Putnam (Contemporary Philosophy in Focus), Cambridge University Press, 2005.
  4. Hilary Putnam, Representation and Reality, 1988.
