Harris, Roy 1996. Signs, Language and Communication: Integrational and segregational approaches. London; New York: Routledge.
Modern theories of communication have been dominated by concepts derived from the assumption that the 'standard' communication situation involves at least two individuals, who may alternate in playing the roles of 'sender' and 'receiver'. The classic problem of psychocentric surrogationalism, identified by Locke (Chapter 9), arises from the fact that A cannot know what is going on in B's mind, nor B what is going on in A's, other than through signs of some kind. It is a problem generated essentially by this asymmetry of knowledge and by the associated difficulty of translating into the external, public domain (i.e. by way of signs) an experience (thought, emotion, etc.) of an essentially internal, private nature. Hence, supposedly, the need for a public system of signs equally accessible to both A and B. (Harris 1996: 167)
Other communication models that accommodate autocommunication (like that of Juri Lotman), on the other hand, introduce the assumption of asymmetry of knowledge, because without it there would be little to communicate. If my knowledge is exactly the same as the receiver's, then what need is there for communication? There is already a pre-given communion. Thus Lotman (as well as Roman Jakobson) introduces the temporal dimension, so that between me the sender and I the receiver there is a temporal distance that allows for asymmetry. On the other hand, the autocommunication model does not assume that a public system of signs is necessary. Charles Morris, for example, assumes that private signs are superstructures on public signs and consequently introduces the concept of post-language symbols. I don't see why a person can't autocommunicate with signs that have no public origin. Albert Einstein reportedly conducted his thought-experiments with private visual signs that could not easily be translated into public linguistic signs.
Little if any of this applies to cases where communication involves only a single individual. For this very reason, self-communication is often regarded as a marginal or degenerate offshoot of interpersonal communication. But when communication processes are considered from an integrational perspective, there is a strong case for saying that this is yet another reversal of the lessons of experience. Far from taking priority over self-communication, interpersonal communication commonly depends on self-communication. (Harris 1996: 167)
We think alike. I've made the case several times that autocommunication precedes, accompanies and succeeds intercommunication. That is, whatever I have to say to another has a good chance of being thought through beforehand; when I'm talking to another I'm also listening to what I'm saying; and when I'm done communicating with another it's very likely that I'll replay the conversation in my mind in some measure, perhaps rehearsing what I'd say to the person the next time we communicate.
Speech provides the most obvious illustration of this thesis. [...] The speaker is the first listener. And the importance of this aural feedback mechanism is amply demonstrated by the extreme difficulties in learning to talk encountered by those who are born profoundly deaf. What has to happen in such cases, in effect, is that other activities must be integrated into the process of self-communication to take the place of the missing aural activity. But where there is no such impairment, we learn to speak to others by a process which involves speaking also to ourselves. And this speaking to ourselves is not a mere bonus, accident or redundancy, but an essential component in the larger enterprise. (Harris 1996: 167)
This has been evidenced by child language acquisition studies which demonstrate that even two-year-olds talk to themselves in their sleep and play with language in what Roman Jakobson calls metalinguistic operations, i.e. saying to oneself what one or another word means or might mean. Language, as a system of signs, can be thought of as a network, and by engaging in metalinguistic autocommunication we in fact gradually enlarge that network, integrating or incorporating new signs for future use.
Nor is the role of self-communication in speech limited to our monitoring the actual sounds we produce. It is often not until we hear ourselves say something that we realize that it is not quite what we wanted to say, that it 'sounds wrong', that it is irrelevant, impolite, potentially misleading, etc. Speech without the possibility of self-correction would be a quite different enterprise from that which we are familiar. For the speaker, self-correction presupposes self-communication. Otherwise there would be nothing to correct. (Harris 1996: 168)
This is also why writing is a very good method of self-communication. Not only can you re-read what you've written and evaluate it objectively, taking your time and re-reading it several times over; you also have the chance to self-correct by way of example, rewriting your thoughts on the model of your previous writings. Instead of creating the sentence from scratch again, you can incorporate what you judge to be good enough.
But speech is only one example of a whole category of cases where processes of self-communication are themselves integrated into processes of interpersonal communication. We make no progress in learning to paint or to draw unless we are able and prepared to carry out that visual monitoring which alone allows us to recognize for ourselves what is not 'right' about this colour or that shading, or the relationship between one contour and another. Asking other people's opinions, getting our drawing teacher's assistance, etc., may help, but is not a substitute for visual self-correction. Again, self-correction implies self-communication. It is when self-communication intervenes that what may have begun as a doodle becomes a drawing. (Harris 1996: 168)
And again this seems to have been inspired by Charles Morris, who discusses self-communication (his term) in, among other things, the artist's method of self-stimulation, continually responding to what s/he has already put on canvas.
In 1926 Piaget identified the phenomenon of (what he called) 'ego-centric speech'. This was based on his analysis of the classroom behaviour of six-year-old children. Ego-centric speech is described by Piaget as speech in which the child does not bother to know to whom he is speaking nor whether he is being listened to. He talks either for himself or for the pleasure of associating anyone who happens to be there with the activity of the moment. This talk is ego-centric, partly because the child speaks only about himself, but chiefly because he does not attempt to place himself at the point of view of his hearer. (Piaget 1959: 9)
Piaget's three categories of ego-centric speech are (i) repetition, (ii) monologue and (iii) 'dual or collective' monologue. In the first of these, the child repeats words and syllables for the pleasure of talking, with no thought of talking to anyone, nor even at times of saying words that will make sense. This is a remnant of baby prattle, obviously devoid of any social character. (Piaget 1959: 9)
In the case of monologue, says Piaget, 'the child talks to himself as though he were thinking aloud', and does not address anyone else. The somewhat more controversial case of 'dual or collective' monologue is explained as one in which an outsider is always associated with the action or thought of the moment, but is expected neither to attend nor to understand. The point of view of the other person is never taken into account; his presence serves only as a stimulus. (Piaget 1959: 9)
Piaget compares this to a similar phenomenon among adults - 'a certain type of drawing-room conversation where everyone talks about himself and no one listens'. (Harris 1996: 168-169)
The third type, the 'dual or collective' monologue, is most reminiscent of Malinowski's phatic communion, in which the speaker talks about his or her own personal "views and life history, to which the hearer listens under some restraint and with slightly veiled impatience, waiting till his own turn arrives to speak" (Malinowski 1946[1923]: 314-315).
From an integrational point of view, there are two main objections to Piaget's classification. In the first place, despite the term 'ego-centric', what Piaget's three categories actually have in common is that they - apparently - serve no social purpose. In other words, ego-centric speech is defined negatively with respect to what Piaget calls 'socialized speech', and the latter is tacitly accepted as the norm. This immediately leads to an explanatory programme in which the features of 'ego-centric' communication are to be explained as defects or failures by comparison with the corresponding features of interpersonal communication. (Harris 1996: 169)
Here it could be argued that phatic communion does serve a social purpose, but that this social purpose is self-referential or circular: phatic communion involves speaking for the sake of speaking. By doing so the speakers avoid silence, which in many cultures is considered threatening or at least alienating, and bond over the mere fact of speaking. The point of phatic communion is that people often bond due to speaking even when what they speak about is completely irrelevant, or at least irrelevant for the other person.
The second objection is that because Piaget does not distinguish between the various ways in which his children's speech is integrated into the continuum of classroom activities, and in particular because he ignores the difference between what the integrationist would call communicational initiatives and communicational sequels, he ends up placing genuine examples of self-communication in the same category as examples of a very different kind.
For instance, he gives the following examples of ego-centric repetition (or 'echolalia'): (i) while a teacher is teaching one child the word celluloid, another child, engaged in drawing at another table, says 'luloud ... le le loid', (ii) while a group of children are looking at an aquarium, someone says the word triton, which is then repeated twice by a child who was paying no attention to the contents of the aquarium, (iii) a cuckoo clock strikes, and a child repeats after it 'coucou ... coucou', (iv) one child tells another that his pants are showing, and a third child in another part of the room immediately says, 'Look, my pants are showing and my shirt, too' (although in fact they are not), (v) one child hears another say the words 'a funny gentleman' and repeats them, although they have no relevance to what he is busy doing at the time, which is drawing a tramcar, and (vi) one child says 'I want to ride on the train up there' and this sentence is then repeated by another child.
There seem to be various possible motivations for some of these utterances other than merely mechanical 'echolalia'. But what the integrationist would point out is that there is a common integrational factor underlying them all. In each case the child produced a vocal sequel to a preceding auditory sign. Neither from the fact that the preceding sign was addressed to someone else (or, in the case of the clock, to no one in particular), nor from the fact that the child's attention was apparently focussed elsewhere at the time, does it follow that the child's sequel was self-addressed. On the contrary, all these cases could be interpreted as examples of a phenomenon related to what Malinowski called 'phatic communion'. That is to say, selective repetition of what is going on elsewhere in the classroom could be an elementary mechanism of participation. And participation is no more nor less than integration into the activities of others. (Harris 1996: 170)
This "integrationist" perspective sounds interesting. What I glean from this section is that it involves the integration of an individual's activities within the continuum of the activities engaged in by others in the social situation. The echolalia aspect is also prevalent in adult dialogue, when one interrupts another by taking up and repeating the last phrase the other has just uttered.
The form of self-communication which has been most widely recognized and discussed in the Western tradition is one to which, paradoxically, the term communication is rarely applied. This is the self-communication we engage in when thinking. But, when recognized as communication, this is often construed as being simply a private counterpart of public (i.e. interpersonal) communication. (Harris 1996: 171)
I recall vividly how, when I was discussing Peirce's form of self-communication, which concerns giving signs to oneself, a distinguished Italian colleague asked if I meant mental signs, i.e. thoughts, and when I answered yes, he scoffed and dismissed it as a worthless perspective.
Plato initiated this line of interpretation of thinking by referring to the soul 'talking to itself'. It survives into the twentieth century in the behaviourist's dismissal of thought as a kind of speech with the sound turned off. Feats of mental arithmetic are seen as pencil-and-paper calculations, but without the pencil and paper. And although it is generally allowed that a composer may compose a tune 'in his head', this is often regarded as a kind of substitute for having a musical instrument to hand on which to compose it audibly. (It is agreed that Beethoven's deafness did not impair his ability to compose; but this is assumed to be because of his previous hearing experience. A Beethoven deaf from birth would be quite another phenomenon.) (Harris 1996: 171)
A similar problem prevails in relation to autocommunication, as in the case of so-called "fantasy communication", wherein a person entertains a dialogue with another person who is not actually present. I would argue that the issue is much deeper than that. When I'm conversing with my "head-mates", it's not because I'm substituting an imaginary person for the real thing. Rather, it is because I've acquainted myself with the ideas of another thinker so thoroughly that these act as their own agents, so that I can contrast my own ideas with someone else's in a very intimate manner.
One curious result of this is that self-communication even comes to be viewed as the ideal form of communication. B. F. Skinner in Verbal Behavior wrote: When a man talks to himself, aloud or silently, he is an excellent listener [...] He speaks the same language or languages and has had the same verbal and nonverbal experience as his listener. He is subject to the same deprivations and aversive stimulations, and these vary from day to day or from moment to moment in the same way. As listener he is ready for his own behavior as speaker at just the right time and is optimally prepared to "understand" what he has said. Very little time is lost in transmission and the behavior may acquire subtle dimensions. It is not surprising, then, that verbal self-stimulation has been regarded as possessing special properties and has even been identified with thinking. (Skinner 1957; Chapter 19)
It is perhaps difficult to believe nowadays that this was written in support of - and not as a reductio ad absurdum of - the thesis that it apparently advances. Skinner's theory of self-communication is rather like [...]

[...] the grounds and generally we make the best bet, and see things more or less correctly. But the senses do not give us a picture of the world directly; rather they provide evidence for the checking of hypotheses about what lies before us. (Gregory 1977: 13)
The metaphor of 'evidence' here confirms that of 'interpretation'. The senses are 'witnesses', but not necessarily reliable ones. In short, this is the familiar Baconian semiology in updated guise, allegedly supported by the latest research from the experimental psychologist's laboratory. (Harris 1996: 173-174)
Argument for consideration.
It is not the quality of the research that the integrationist has any doubts about, but the quality of the semiology. In other words, it may well be true that my brain does many complicated neurological things in order to allow me to have even the most trivial visual experience, like seeing a boiled egg on the breakfast table in front of me. But it is a plain category mistake to suppose that what the brain does for me in this case involves or consists in the interpretation of signs. What I see on the breakfast table is not the sign of an egg; it is an egg, or rather, the visible part of it. At least, I hope it is. It could be of course that I have forgotten it is April 1st and I am staying with a relative who has children of school age, given to upholding honourable traditions. So when I try to tackle the object with my eggspoon it does turn out to be not an egg after all, but a kind of 'egg substitute', i.e. a ceramic or plastic imitation. But neither this nor the possibility of any other visual deception makes it plausible to regard my seeing it there on the table as a matter of interpreting signs. (Harris 1996: 174)
This is Austin's cheese on the table all over again. Harris, like Austin, does not seem to comprehend that the understanding of what an egg or a piece of cheese or any other food item is, is not "Natural", as the author put it previously, but based on knowledge gained from experience or discourse. In the case of the egg, experience may very well suffice, but with some types of cheese it may take prior discursive knowledge to identify a white, yellow, gray or green substance as belonging to the type "cheese". Likewise, in identifying an egg as an egg, as opposed to the imitations used in some traditions, the operation is semiosic - you consider what day it is, and depending on the certainty of the sign you interpret the object in front of you as being or not being an actual edible egg. The "integrationist" should in my view rather welcome the idea that everyday objects are integrated into the temporal sequence of events.
Theorists reluctant to concede this make a great song and dance about the difference between 'vision' and 'perception'. There are some who would insist not only that any perceptual judgment involves the application of a concept or concepts, but that it also presupposes a theory (about the external world or some aspect of it). Thus we find arguments like the following.
- Any perceptual judgment involves the application of concepts (for example, a is F).
- Any concept is a node in a network of contrasted concepts, and its meaning is fixed by its peculiar place within that network.
- Any network of concepts is a speculative assumption or theory: minimally as to the classes into which nature divides herself, and the major relations that hold between them.
Therefore,
- Any perceptual judgment presupposes a theory.
Proposition 2 in the above is undiluted Saussure, masquerading as cognitive psychology. Proposition 3 is well-cured Bacon, and the butcher's job done without human intervention ('nature divides herself'). (Harris 1996: 174-175)
I quite like Harris's style. He is, for the most part, very clear, and in this case up front about his interpretation and criticism. I like this because it is much easier to proceed from this than from very slick writings that obfuscate the major points with pretty language. I would reply to both of these criticisms on the basis of what I've learned from E. R. Clay, a markedly non- (or rather pre-)Saussurean. Let's proceed with the egg example. (1) In order to judge the object on the table to be an egg, I need to have a conceptual remembrance of the general idea of an egg. So far so clear. (2) Judging the object on the table thus obtains little more conscious knowledge than the name, egg. But here's the tricky part: Clay holds that at the same time the judgment begets unconscious knowledge of the relations the object may possibly be in, i.e. that it's probably something edible, but may not be (it may be an imitation), that the one comes from a chicken farm and the other from a plastic factory, what day it might be for it to more likely be an imitation, etc. That is, instead of a static (or even dynamic) network or system of concepts, Clay introduces us to a version of the Peircean sign-process in which signs are not so much fixed beforehand with meanings but constantly evolving towards more certainty. And this is where it really gets interesting: (3) Clay also allows for a speculative assumption or theory, but calls it "thesic affection", i.e. "a kind of mental affection of which tendency-to-become-knowledge is the differentia". When I look at the object on the table I have a thesic affection to know what it is. This affection becomes knowledge if I also have the "cognitive complement", i.e. "the knowledge needful to convert a thesic affection into a knowledge". Had I previous experience with eggs and egg imitations, the process is quick and painless, perhaps only the name of the object appearing to consciousness. But were I a child with no such experience, studying the object - perhaps tackling it with an eggspoon as I've seen parents do with similar objects - the experience would supply the relevant cognitive complement. Therefore, according to this interpretation of Clay, (4) perceptual judgment not only presupposes knowledge, but oftentimes leads to a thesis, i.e. a knowledge of an objective thing that is verbally expressible by a proposition (i.e. "This is an egg"). To drive the point home, consider this illustration by Clay: "Imagine yourself seeing at a distance a person who so affects your faculty of identification as to beget in you a faint opinion that he is your father, imagine that the opinion alternates for a time with the opposite opinion until, getting near to the object, you become certain that it is your father." (Clay 1882: 32). To go over the steps once more: (1) you see an object on the table that "so affects your faculty of identification as to beget in you a faint opinion" that the object is an egg; (2) this opinion alternates - you consider, whether consciously or unconsciously, the possibilities - whether it is an actual egg or an imitation egg, possibly taking the date into consideration; (3) and finally you become certain that it is indeed an egg. Peirceans will no doubt sense the progress from Firstness (here, "faint opinion") to Secondness (here, "alternating opinions" related to something other, such as the date) to Thirdness (here, "certainty" in the form of a thesis).
It is not worth arguing with 'cognitive' theorists of this persuasion, since all they would achieve if their persuasions were successful is a pointless devaluation of terms like concept and theory. Fortunately, we do not need either psychologists or philosophers to tell us when we are dealing with signs and when we are not. Our senses do not have to have signs to interpret: they can get on perfectly well without them. (Harris 1996: 175)
But it does help to turn to psychologists and philosophers for authority. We don't have to devalue concept and theory if there are suitable alternatives out there. I agree that our senses do not have to have signs to interpret, but consciousness does, and it's difficult to imagine consciousness without the senses.
What confuses many people (and gives the Baconian semiologist the opportunity to put an oar in) is that we do often fasten upon some obvious or characteristic feature(s) of a complex object or event in order to identify it. (Harris 1996: 175)
E.g. Jakobson's "distinctive feature", but also Clay's "unitiveness".
This happens frequently in circumstances where a more careful investigation is either out of the question or considered unnecessary. But the fact that, for instance, I recognize a certain familiar smell and take it as a sign that someone is making coffee in the kitchen - and that this more often than not turns out to be correct - does not somehow provide proof of the philosophical thesis that coffee itself is an unknown substance of which all the perceived properties are merely 'signs' conveyed by our senses to our brain. For there is a huge gap between the inference which links the presence of the smell to the presence of the coffee and the doctrine that whatever we know about the external world is a sum total of sensory impressions, about which our cerebral cortex constructs a 'theory'. Nor does the fact that I would doubtless fail to recognize the object on the breakfast table as an egg unless I saw its shape, size, colour, etc. support the metaphysical contention that this combination of visual clues is a complex message in a code which, fortunately, my brain is equipped by Nature to decipher (having somehow previously acquired the right 'concepts'). (Harris 1996: 175-176)
The gap is artificial. It's like saying that there's a gap between me smoking a cigarette and the tobacco industry. It seems that Harris is obfuscating the relation between part and whole. Redintegration (smelling coffee and inferring that there is coffee nearby) is not unrelated to consciousness; it is just one part of the latter's operations. The part about "a complex message in a code" introduces needlessly structural concepts into an area where they have no place. That's just inappropriate.
There are indeed cases where my sensations become signs. Groping my way through a familiar room in the dark (because the lights have fused), what my fingers feel and my feet encounter become signs of chairs, tables, walls, doors, etc. There is no semiological mystery here. These sensations become signs because - and insofar as - they integrate past memories with a current programme of action - i.e. crossing the room in the dark. What Bacon and his intellectual heirs seem to suppose is that all sense perception is a matter of groping one's way in the dark through a physical universe otherwise unknown and unknowable. (Harris 1996: 176)
Yes, exactly. That is an awesome metaphor. The thing is, this conflict can be surpassed by introducing a gradient, a gradual development from light to dark. It's not just black or white - clearly visible versus groping in the dark - for integrating past memories with current programmes of action occurs every step of the way; it is merely more noticeable, more foregrounded, when groping through the dark. I don't have to consider the relations between furniture in the light consciously as much, because it's easy, but I do consider them, unconsciously. In the dark this process still occurs, but because there is more resistance, more unfamiliarity, more strangeness and unpredictability, I have to make more of a conscious effort to achieve the same result. In my small and open-spaced room I don't even have to grope around that much in the dark because it's easy to navigate; but in a completely new setting I'm groping and stumbling about in broad daylight. It's not that the universe is absolutely unknown and unknowable outside of my comfort zone; it's that there are degrees of distinctness and indistinctness to everything.
But is not the misting of the glass a natural phenomenon? Yes. But so is the expansion of mercury in the thermometer. The question is not whether these effects arise from natural causes, but how they are integrated into a circumstantially relevant generation of signs. When I dip my toe in the water I simply feel something which is wet and hot, or cold, or lukewarm, etc.; but these sensations and my judgments as to the wetness, heat, etc. require no intermediating sign. Nor do they have any semiological function at all until I treat them as signs for purposes of some further activity.
Is there any intermediating sign involved when I hear the sound of my own voice? In the auditory sensation, no. But having an auditory sensation is not monitoring. The whole difference is that monitoring implies circumstantially relevant criteria of judgment. So it is with the temperature of the water. That my toe feels something hot, cold, etc. is not a sign. But if I judge from the sensation that the temperature of the water is right or not right, that it is an indication of the temperature of the larger body of water of which my toe made contact with only a small part etc., then I take that sensation as a sign. 'Right' means in relation to some activity projected as part of a potentially integrated sequel. In checking the temperature with my toe, I construct an assimilative sequel, which may, in the right circumstances, be integrated with an enactive sequel (e.g. taking a bath, going for a swim, or deciding not to). (Harris 1996: 177)
But what about when the enactive sequel involves another person asking me how warm the water is, and I, perhaps due to already having my shoes off, dip my toe in the water and tell him or her how warm it is? The sequence comes to a close with that, for the next step is unknowable - perhaps we are considering going for a swim, but maybe we have nothing of the kind in mind and are just passing time? This integrative approach seems too heavily oriented towards action, i.e. energetic interpretants. It doesn't seem to reach Thirdness yet.
Just as self-communication is not interpersonal communication restricted to one person, nor is interpersonal communication self-communication shared. (Harris 1996: 180)
I would argue that both cases can be. I may write a letter to another person but not send it, and return to it myself at another time, when my person has changed and I am another to my previous self who wrote the letter. Or I may write a note for myself but choose, for whatever purpose, to share it with others. In these cases the intended addressee is the dominant feature.
This is eminently clear from the motivation he attributes to Crusoe at this point in the narrative. The castaway is not concerned about leaving any records for others. He merely hopes, by setting out his condition in what he calls the 'debtor and creditor' format, to 'deliver my thoughts from daily poring upon them'. (Harris 1996: 184)
This remark concerns the curious phenomenon of "getting leave" from obsessive thoughts once they are written down.