
CONSCIOUSNESS

In 'Methodological Solipsism Considered as a Research Strategy for Cognitive Psychology' (Rosenthal, 1991), Fodor outlines a theory of mind which, with regard to that aspect of cognition which can be 'scientifically' investigated, is essentially independent of semantics. He seeks to demonstrate that a formal syntactic arrangement of symbols is all that is necessary to explain propositional attitudes and intentionality, and thus behaviour. Searle and Dennett outline a number of weaknesses in this argument, and I will attempt to show that current neurobiological knowledge in fact validates some elements of all three positions and rejects others, but that these elements can be synthesised to demonstrate that cognitive psychology and theories of mind do require more than just a computational approach.

Fodor notes that mental states are typically type identified, or individuated, by only two characteristics or degrees of freedom: representations, and the relations between those representations. These relational states are really operations on the representations, which alone have content. An example is 'Marvin eats a banana', with Marvin being Marvin, not Tom, Dick or Harry, and a banana being a banana, not an apple; the representations have specific content, and the mental state is uniquely identified by the relation between the two, which in this case is eating, not desiring, buying etc. This is called the representational theory of mind, and Fodor suggests it is a reasonable working hypothesis from which to proceed in further discussion.

There is a stronger view among cognitive psychologists, however, which holds that mental processes are somewhat more than this and are in fact computational. Fodor suggests that these processes are symbolic, over and above being representational, and formal, ie having syntactically acceptable arrangements. This syntax is a 'rough' syntax and cannot be easily described, but he defines it as the 'formality' condition, which exists independently of any semantic properties pertaining to particular representations.

In the representational theory of mind it is the content of representations which type identifies individual mental states: eg 'Mary is happy' and 'Bill is paranoid' contain representations of real people with specific identities, ie having semantic content. In the computational theory of mind it is the formal relationship between symbolic representations which type identifies individual mental states. These symbolic representations are independent of semantic content, ie it does not matter whether their referents exist or not, and it is the relationship between them which defines and individuates mental states.

Historically the field of psychology has been split along the lines of a rational psychology (the formality condition) and a natural psychology. The two defining characteristics of rational psychology are:

- mental states are type individuated only if they can be differentiated through introspection, and

- there is no difference between perception and hallucination, or between knowledge and belief.

Natural psychologists, by contrast, believe that it is the organism/environment interaction that type identifies mental events.

If the computational perspective is taken, ie the formal operational approach, it is possible to combine aspects from both points of view and view the mind in essence as a Turing machine which is processing or computing relations between representations on a tape, which corresponds to a memory store. This represents the rationalist view. The naturalist view, with its emphasis on organism/environment interaction, is reflected by 'new' representations which are placed on the tape by so-called 'oracles', which stand for sensory transduction systems. In this situation mental states are type identified only by relations between symbols on the tape and are not dependent on the semantic nature of those representations. The key assumption is, of course, that while the symbols or representations may have had semantic validity when placed on the tape, they are now independent of it.
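To make the tape picture concrete, the following is a minimal sketch (my own illustration with made-up token names and rules, not Fodor's formalism): a mental state is type identified purely by the formal relation between tokens on the tape, and an 'oracle' adds new tokens the way a sensory transducer would.

```python
# A minimal sketch of the tape picture: operations inspect only the form of
# the symbols on the tape, never what (if anything) they refer to.
# All token names and rules here are illustrative assumptions, not Fodor's.

tape = ["MARVIN", "EATS", "BANANA"]      # representations already on the tape

def oracle(tape, new_symbol):
    """Sensory transduction: place a new symbol on the tape. Whatever semantic
    contact it had with the world plays no further role once it is a token."""
    tape.append(new_symbol)

def type_identify(tape):
    """Type-identify a mental state purely by the formal relation between
    tokens; the rule checks token shape (string identity), never reference."""
    if "EATS" in tape:
        i = tape.index("EATS")
        return ("EATING-RELATION", tape[i - 1], tape[i + 1])
    return ("NO-RELATION",)

oracle(tape, "APPLE")                    # new input arrives via the 'oracle'
print(type_identify(tape))               # ('EATING-RELATION', 'MARVIN', 'BANANA')
```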

It is this which suggests that a 'methodological solipsism' is the basis for cognitive psychology, although Fodor is quick to point out that this is in fact methodological and not real solipsism (the view that the world is nothing but what is in the mind). For Fodor the world exists, but he is saying that from a psychological point of view the mind can operate as if it does not, because the symbols, and the relationships between symbols, which type individuate mental states are independent of the semantic content of those symbols, ie the 'world'. This is demonstrated by the psychological indistinguishability of the statements 'I will do K iff P' and 'I will do K iff I believe P'.

Fodor suggests that this is leading to a weak sort of compromise between the two positions, but that in reality only a computational psychology can be investigated productively. It is only this, he says, which leads to behavioural explanations, the province of cognitive psychology, via propositional attitudes, and it is only the formality condition which enables propositional attitudes to 'exist'. A propositional attitude, or intentionality, is enabled by 'opacity' as differentiated from 'transparency', and opacity is defined by two conditions: a failure of existential generalisation and a failure of substitutivity of identicals. This in effect means that opacity applies where different symbols can represent the same semantic or transparent reality but are not interchangeable. His example is the Morning Star and the Evening Star, which are opaque representations of the transparent reality of Venus.
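The failure of substitutivity can be illustrated with a toy example (my own, not Fodor's): if an agent's belief store is indexed by the symbol used rather than by its referent, two co-referring symbols cannot be exchanged without changing which beliefs are ascribed.

```python
# Toy illustration of opacity: the belief context is keyed by the symbol
# itself, not by its referent, so substituting co-referring symbols fails.

referent = {"Morning Star": "Venus", "Evening Star": "Venus"}   # transparent reality

# Opaque belief store: indexed by the symbol the thinker actually uses.
believed_to_rise_at_dawn = {"Morning Star": True, "Evening Star": False}

a, b = "Morning Star", "Evening Star"
print(referent[a] == referent[b])                    # True: identical referents
print(believed_to_rise_at_dawn[a] ==
      believed_to_rise_at_dawn[b])                   # False: substitution of identicals fails
```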

Thus the psychological indistinguishability between what I know and what I believe, together with the nonexistential nature of opaque mental states, provides a convincing argument for a computational theory of mind which is solipsistic in nature. There is one flaw which Fodor notes, however, which is perceived with mental states involving the use of pronouns. In the case of Sam thinking 'Sam feels sick' and Philip thinking 'Sam feels sick', the thoughts are of the same opaque type. In the case of Sam thinking 'I feel sick' and Philip thinking 'He feels sick', they are still of the same opaque type, and thus the content is the same, but symbolically they are not the same. In this case the opaque condition is met, but it is not completely independent of semantics, since the 'He' and the 'I' do have the same existential referent, namely Sam, which means that a solipsistic situation does not exist.

The key elements in this argument seem to be concerned with the representations on the 'tape', ie memory. The biology of memory and the different systems involved are slowly being revealed, and there is a growing synthesis of the neurobiological and cognitive understanding of memory. The actual memory mechanisms have yet to be fully elucidated, but there is growing agreement on many of them, and memory is classified as shown in Figure 1 (Squire and Zola, 1991). This partitioning of memory function is along the lines of a primary distinction between declarative and nondeclarative memory.

[Figure 1: classification of declarative and nondeclarative memory systems (Squire and Zola, 1991); image not reproduced]

Declarative memory consists basically of conscious memories, the sort of memory that is referred to as such by a lay person, and is absent in cases of amnesia. Nondeclarative memory consists of the nonconscious memories which are reflected in skills, eg riding a bike, habits, priming and simple conditioning (Squire, 1992). Priming refers to the increased efficiency with which an item is processed following the same item having been recently processed. People with amnesia still exhibit a full-strength priming effect and can acquire motor, perceptual and cognitive skills at a normal rate (Squire, 1992).

Declarative memory is lost, ie amnesia occurs, when bilateral damage to the hippocampus occurs; while previous memories can be recalled, no new memories can be laid down (Kandel et al, 1991, ch 64). The hippocampus has many pathways to and from virtually all higher order associative areas of neocortex, both frontal and posterior (Fair, 1992, p106). Thus long-term memory includes 'declarative' memory but is not only conscious memory, as the priming effect and procedural skill learning demonstrate long-term memory which is unconscious.

However, it is the cortical aspects of 'consciousness' that are the main basis of the differentiation between declarative and nondeclarative memory, as this distinction also rests on the conscious-unconscious distinction.

If we accept that long-term memory is consolidated in the cortex, although widely distributed (Cai, 1990), what needs to be ascertained is how declarative memory, which is consciously available on demand, differs from nondeclarative memory, which is not. In amnesia resulting from damage to the hippocampi of the medial temporal lobes, retrograde memories (memories from before the damage) and short-term memories are unimpaired, but there is a global deficit in the formation of new declarative memories. The hippocampus is in fact involved in the actual storage of declarative memories through the process of long-term potentiation (Cai, 1990). This is noted in many studies, but a similar role for the hippocampus in retrieval (Cai, 1990) must surely be ruled out, as retrograde memories can still be recalled although the hippocampus is damaged.

As the hippocampus is not involved in retrieval, and amnesiacs have unimpaired priming, skill learning and intelligence, this suggests that nondeclarative long-term memory for priming and skill learning is stored without the hippocampus, and that the hippocampus is involved only in the storage of declarative memory.

'Declarative memory involves the processing of bits and pieces of information which the brain can then use to reconstruct past events. In a study of the nature of declarative memory researchers analysed remembered versions of stories. The versions that subjects recalled were shorter and more coherent than the original. The subjects were unaware that they were substituting and they often felt most certain about reconstructed parts. They were not confabulating, they were merely recalling in a way that interpreted the original material in a way that made sense.' (Kandel et al, 1991, ch 64)

This indicates that what is happening is a conscious reconstruction of a scene or episode from relevant 'bits'. This is creative and consistent with frontal lobe planning. Frontal-posterior cortical arrangements are organised in relation to the distal-proximal correspondence of the sensory input and motor output regions of the cortex (Fair, 1992). That is, as sensory information is received in the posterior cortex and becomes more distal from the input site, it is connected with a frontal site more distal from the motor output site. Thus information from the higher association areas of the cortex is connected to the highest association areas of the prefrontal cortex.

Similarly, it would seem reasonable that the hippocampus, whose creation of declarative memories can be made more efficient by the limbic system, the reticular activating system and other stress factors, is involved in transferring posterior sensory information to the prefrontal area as a basis for declarative memory. Fair suggests that this is merely an indexing system, but the problem with this idea is that, as demonstrated by the cognitive study cited earlier, the recall of stories or events is not a truncated literal form. It is rather an altered form, or transformation, of the memory bits encoded in the parietal cortex, indicating a totally new encoding or engram, not just voluntary reactivation of previously processed sensory input.

It could be concluded that consciousness does occur in the posterior cortex when sensory input of sufficient strength occurs, and that retrieval of declarative memories, spatial and episodic, which is not driven by the hippocampus, may in fact be an activation of the more distal prefrontal areas, which restore selected 'bits' from the posterior association areas and are themselves activated by the more proximal motor areas.

 

 

In a 1982 paper, Ungerleider and Mishkin compiled the results of electrophysiological, behavioural and anatomical studies and proposed that there are two streams of visual processing:

- the dorsal stream, from striate cortex to posterior parietal cortex, and

- the ventral stream, from striate cortex to the inferotemporal region.

Goodale and Milner (1992) classified the distinction between the two streams of processing on the output rather than the input side, and came up with two systems described as 'how' versus 'what' for the dorsal parietal and the ventral inferotemporal streams respectively. These are related to:

- automatic tasks requiring vision, and

- tasks of visual perception.

The neuroanatomical structure related to these pathways is shown below in Figure 3.

[Figure 3: neuroanatomy of the dorsal and ventral visual processing streams; image not reproduced]

Damage to the inferotemporal cortex results in an inability to recognise common objects or even faces, even though such patients can still navigate through the world normally. Damage to the posterior parietal areas (polymodal association cortex) results in an inability to reach accurately for visually recognised targets, which implies more than just an impairment of spatial ability.

A patient who suffered from Balint's syndrome, which involves disorders of visually guided reaching, spatial attention and gaze following damage to the posterior parietal cortex, showed marked deficits on reaching and grasping tasks. The woman was asked to reach out and grasp, with her thumb and index finger, a number of different-sized wooden blocks. Her attempts were marked by a consistent inability to form the correct grip aperture for the blocks, and yet when the task required recognising line drawings of various objects she had no difficulty.

They then cited a case study in which a woman referred to as DF had suffered damage to Brodmann's areas 18 and 19 while area 17 was left largely intact. This interrupted visual processing in the inferotemporal area but not the parietal cortex. This patient was unable to describe, either verbally or manually, the correct size of a number of blocks used in a similar grasping experiment, or the angle of orientation of a slot into which she was required to insert a card. Although she could describe neither, she could reach out and grasp the blocks with the same efficiency as normal subjects and could place cards in the slot with normal ability.

They argued that the situation with DF suggests that awareness may not be connected with the dorsal system. They suggested that this is a mechanism for preventing interference with the perceptual tasks of the ventral system, where access to consciousness would appear to occur. If so, this should also occur in normal subjects, and this was demonstrated in a task where subjects were required to track a moving light target. Saccadic eye movements were monitored, and it was found that if the target moved twice within a given time window the subjects were unable to report two movements perceptually or consciously, although eye movements (associated with parietal cortex) had followed the target accurately in both cases.

They maintained that, while it is feasible for consciousness to require the participation of the ventral stream of processing, the prefrontal association area is probably the major component in conscious processing. This suggests that differences in conscious operations may depend on anatomical location rather than on the specific nature of the mind in question. This interpretation of the location of consciousness cannot be correct either, as I will demonstrate; rather it reflects the location of conscious declarative memories, which are conscious when being laid down and can also be recalled at a later stage.

The hippocampus, which is located in the temporal lobe, is certainly involved in transferring the ventral stream of visual processing into the prefrontal area. It seems almost certain that there is a similar dual processing stream for auditory information, reflected in the way we process speech: we comprehend in the dorsal stream and extract words or phrases via the inferotemporal stream, which are then stored in declarative memory.

Somatic information does not have this duality of processing, as it enters the cortex in the anterior parietal cortex and would have to pass through polymodal association areas before passing into the inferotemporal region. From introspection we are well aware that while we can summon up memories of painful situations in the auditory and visual modes, we cannot imagine the experience of pain itself, ie we have no declarative memory of pain. As we are also aware, however, pain is a very real conscious experience, and thus we see that consciousness related to actual events occurs at posterior brain locations as presentations to the cortex, and as re-presentations, which endure beyond their temporal semantic realities, in the prefrontal cortex.

 

In 'Brain Writing and Mind Reading' (Rosenthal, 1991) Dennett suggests that computational psychology as a theory of mind is dependent on our brains being a 'library of our thoughts and beliefs' encoded in a language of thought. If this is true, it suggests that one day neurologists will be able to read our minds by cracking the 'cerebral code'. This position, which is what 'methodological solipsism' holds with its syntactic relations between formal symbols encoded on a 'tape', he thinks is untenable as a complete explanation of mind for a number of reasons.

He agrees that the brain must be an organ that represents, but does it do so in 'sentences', ie syntactic relations between symbols?

He cites the case of Sam, a reputable art critic, who acts as if he believes that his son, who is a lousy painter, is in fact a good painter. If the brain writing hypothesis is true, this would indicate that Sam, as indicated by his propositional attitude, actually believes his son is a good painter. Dennett says there is sometimes no explaining what a person believes, and that for reasons of loyalty or self-deception he may be acting contrary to what he really believes. This is contrary to Fodor's position, because Fodor's theory of mind is concerned only with the symbolic reality as expressed opaquely and takes this to be causal of behaviour. This ignores the semantic reality and the beliefs which result from experience and are consistent with a person's biographical history.

A solution of course would be to have two different stores of representations, one that relates to our personal and conscious uses and another that operates behind the scenes, as it were. The number of beliefs that can be represented appears to be indefinite, and a way to cope with this would be to have a core of explicit beliefs from which other beliefs can be derived when required. He notes also that the syntax, not only the vocabulary, must be physically represented in our heads if a language of thought really exists, and there is an infinite number of possibilities that could be represented, eg 1<2, 1<3, 1<4 and so on.
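A minimal sketch of this 'explicit core plus derivation' idea (my own illustration, not Dennett's; the predicate names are invented) might look like this: only a small core is stored, and the indefinitely many beliefs of the form 1<2, 1<3, 1<4 are generated on demand.

```python
# Sketch: a finite explicit core of beliefs plus a derivation rule that
# yields indefinitely many further beliefs on demand, none stored explicitly.

explicit_core = {("less_than", 0, 1)}    # the only belief 'written down'

def believes(claim):
    """True if the claim is in the explicit core or derivable from it."""
    if claim in explicit_core:
        return True
    relation, a, b = claim
    # Derivation rule: the ordering of the integers follows from arithmetic,
    # so no individual 'a < b' belief needs to be physically inscribed.
    if relation == "less_than" and isinstance(a, int) and isinstance(b, int):
        return a < b
    return False

print(believes(("less_than", 1, 4)))     # True, although never stored
print(believes(("less_than", 4, 1)))     # False
```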

Is it possible to find all of our beliefs written in our heads, or is it only judgements that are written there? If beliefs are written in our heads, it should be possible to 'insert' or 'extract' them. He concludes that what would be expressed in brain writing is not really a belief but only a 'judgement'; a 'belief' would not be expressed in brain writing.

In 'Minds, Brains and Programs' (Rosenthal, 1991) Searle argues, contrary to the position of Fodor, that computation alone cannot produce a mind. He defines the difference between strong artificial intelligence (AI) and weak AI by the claim, made in the case of strong AI, that the appropriately programmed computer actually has cognitive states, ie gives rise to a mind. Weak AI involves only the 'simulation' of a mental state, and the difference is like the difference between a real cyclone and a simulated cyclone.

He takes the example of the AI scripts of Roger Schank and explains how, given the parameters of what normally happens in a restaurant, ie a script, a machine can logically answer questions such as 'Did the man eat the hamburger?'. Given other facts, such as that he ordered a hamburger, paid money to the cashier and smiled at her, the computer can answer this question correctly. This is a formal arrangement of symbols which gives the correct answer, but does it prove the computer has understanding or cognitive states?
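A toy sketch in the spirit of Schank-style scripts (not Schank's actual program; the facts and rule below are invented for illustration) shows how purely formal matching over a script and a few stored facts yields the correct answer without anything that looks like understanding:

```python
# A restaurant 'script' as a default expectation, plus literal facts from the
# story; answering is nothing more than symbol matching against these.

script_default = "customer who orders and pays is assumed to have eaten"

facts = {"ordered": "hamburger", "paid": True, "smiled_at_cashier": True}

def answer(question):
    if question == "Did the man eat the hamburger?":
        if facts.get("ordered") == "hamburger" and facts.get("paid"):
            return "Yes"        # licensed by the script default, not by understanding
        return "Unknown"
    return "Unknown"

print(answer("Did the man eat the hamburger?"))   # "Yes"
```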

Searle does not believe it does, and sets out to demonstrate this with his 'Chinese room' story. This is a situation in which Searle, who understands no Chinese, is in a room with only an input and an output window. He is passed in: 1. a list of Chinese characters; 2. another list of Chinese characters together with a set of rules in English relating the second set of Chinese characters to the first; and 3. a further list of Chinese characters, together with a list of characters that can be used as output and instructions in English on how to correlate these with the characters that had gone before. This would correspond to Schank's machine being provided with 1. a script in Chinese, 2. a story in Chinese, and 3. questions and answers in Chinese, with the instructions in English describing how the sets of Chinese symbols are related to one another.

From this information it would be possible for Searle to receive input questions in Chinese and output answers in Chinese, which would result in any observer believing that Searle understood not only Chinese but also the questions being asked and the appropriate responses, when in fact he understood nothing. He is, in effect, the 'program', and in the same way, Searle argues, a machine understands nothing. He argues, then, that intentionality cannot be explained by the formality condition and that machines cannot think like human beings. Strong AI and functionalism, he argues, are both dependent on a 'program' that is independent of the machine which realises it, and yet this is not the case with the human brain: mental events are dependent on the human brain.

This is not to say that human beings do not process information in a computational way, as Fodor suggests, but rather that understanding does not result from computation; it results instead from the neurobiological makeup of human beings. Perhaps chemical engineering in the future will enable a 'machine' to be built that does have a mind.

To conclude, I will synthesise the relevant arguments from the biological perspective. Fodor argues that rationalist psychology is the only one open to investigation, because of the symbolic nature of representations and the rough syntax involved in type individuating mental states. The evidence suggests he is dealing only with representations extracted from a larger store of 'semantic' representations, and these are not symbolic but reproductions of parts of the original representations. This is deceptive in the case of verbal representations, but in fact these are real auditory stimuli or squiggly black visual representations. As such, the rearrangement of such representations in prefrontal areas is intentional but does not produce intentions.

Secondly, consciousness, which is associated with mind, is found not only in the prefrontal area, where syntactic arrangements of representations are possible, but also in posterior areas associated with 'semantic' representations, eg pain, where computation is not possible. Together these indicate that the mind cannot be viewed as acting from a solipsistic perspective only, although it is possible for it to do so. This is what Dennett's art critic has done. His biographical history is encoded in his posterior cortex, and he has extracted 'bits' and arranged them syntactically so that he 'thinks' his son is a great artist. This is his judgement, but his belief is dependent on his total biography, which is encoded in a location that makes it consciously unobtainable and unable to be manipulated, but which still affects behaviour and instigates desires. This meets Dennett's criterion of two different memory stores.

The fact that amnesiacs with bilateral destruction of the hippocampus (a biological coding device), which in normal subjects evidently enables them to encode the indefinite number of possibilities Dennett speaks of, can no longer encode or compute in their prefrontal areas but still function normally in many respects indicates that biological encoding of 'brain writing' is possible. This is because memories are widely distributed over millions of neurons, and at the same time an individual neuron may be involved in a million different memories as a result of weighting patterns in neural networks. Of course the complexity is such that one could not extract a memory, but nonetheless memories are physically encoded.
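A minimal sketch of this kind of distributed storage (a generic Hopfield-style illustration, not a model of any particular brain region) shows how several 'memories' can be superimposed in one set of connection weights, so that every unit participates in every stored pattern and a partially damaged cue can still retrieve the whole:

```python
# Distributed storage sketch: three patterns share one weight matrix, and a
# partially corrupted cue is restored by iterative updating.
import numpy as np

rng = np.random.default_rng(0)
n = 64                                             # model neurons
patterns = rng.choice([-1, 1], size=(3, n))        # three 'memories'

# Hebbian storage: every pattern contributes to every connection weight.
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)

def recall(cue, steps=5):
    """Iteratively update the state towards the nearest stored pattern."""
    state = cue.copy()
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

noisy = patterns[0].copy()
noisy[:10] *= -1                                    # corrupt part of one memory
print(np.array_equal(recall(noisy), patterns[0]))   # expected: True (pattern restored)
```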

Searle is of course correct in saying that computation alone cannot give rise to mind, as demonstrated by his Chinese room argument. Computation is, however, something that a brain can do. He is also close to the mark in saying that the mind is dependent on the biological nature of the brain. I would say that the evidence suggests the brain has a biological capacity to produce a state that is consciously available to the mind, and this is why computation is possible: the environment can produce the required biological state in the posterior cortex, and the mind can produce the required biological state in the prefrontal area independently of the environment (solipsism). The amnesiac who cannot compute obviously still has a mind; it is only neurobiological damage that prevents him from computing via the formality condition.

This does not rule out the notion of dualism, as Searle suggests it does, but in fact enhances it, if we say that a 'spiritual' mind can experience the body as a uniform state while the brain can, due to its nature, reflect a contrasting state which is perceived as consciousness, and that intentionality proceeds from the mind, which operates the machine to achieve the intended desires. If the machine is broken, eg by hippocampal damage, the capacity to achieve those desires is limited. The fact of an independent mind which knows the whole body, including its biographical brain inscriptions, is reflected in conscience. In the case of Sam, his mind 'knows' all that he has experienced unconsciously, as evidenced by priming etc., and as he is a good art critic he 'knows' that his son is lousy. What he proposes under the formality condition is that his son is a good artist, which is incoherent with what he 'knows' or, as Dennett would say, believes. The proposition and the belief are located in physically distinct locations and yet known simultaneously, and this demonstrates that the mind must be immaterial or non-local, and that mental states are biological representations perceived as presented, ie 'semantic', or rearranged as desired, ie syntactic.

 

 

References

Cai, Z. The neural mechanism of declarative memory consolidation and retrieval: a hypothesis. Neuroscience and Biobehavioural Reviews, 1990, 14(3), 295-304.

Fair, C.M. Cortical Memory Functions. Berlin: Birkhäuser, 1992.

Goodale, M.A. and Milner, A.D. Separate visual pathways for perception and action. Trends in Neurosciences, 1992, 15(1).

Kandel, E.R., Schwartz, J.H. and Jessell, T.M. Principles of Neural Science. New York: Elsevier, 1991.

Rosenthal, D.M. (ed.) The Nature of Mind. Chapters 53, 54 and 55. New York: Oxford University Press, 1991.

Squire, L. Declarative and nondeclarative memory: multiple brain systems supporting learning and memory. Journal of Cognitive Neuroscience, 1992, 4, 232-243.