Chalmers, A. F. 1999. What is this thing called science? Third edition. Indianapolis; Cambridge: Hackett Publishing Company, Inc.
But since I have no time for obscurantist nonsense about the incommensurability of frameworks (here Popperians prick up their ears), the extent to which I have been forced to acknowledge and counter the views of my Sydney colleagues and adversaries has led me to understand the strengths of their views and the weaknesses of my own. (Chalmers 1999: xii)
I, on the other hand, eat obscurantist nonsense for breakfast, lunch, and dinner.
Not all of this can be blamed on Louis Althusser, whose views were very much in vogue at the time of writing, and whose influence can still be discerned to some extent in this new edition. I have learnt my lesson and in future will be very wary of being unduly influenced by the latest Paris fashion. (Chalmers 1999: xiv)
Very characteristic: take up French theory and then regret it.
It is a reaction somewhat like this that led the philosopher Paul Feyerabend (1975) to write a book with the title Against Method: Outline of an Anarchistic Theory of Knowledge. According to the most extreme view that has been read into Feyerabend's later writings, science has no special features that render it intrinsically superior to other kinds of knowledge such as ancient myths or voodoo. A [|] high regard for science is seen as a modern religion, playing a similar role to that played by Christianity in Europe in earlier eras. It is suggested that the choices between scientific theories boil down to choices determined by the subjective values and wishes of individuals. (Chalmers 1999: xxi-xxii)
Blasphemy!
The British empiricists of the seventeenth and eighteenth centuries, notably John Locke, George Berkeley and David Hume, held that all knowledge should be derived from ideas implanted in the mind by way of sense perception. The positivists had a somewhat broader and less psychologically oriented view of what facts amount to, but shared the view of the empiricists that knowledge should be derived from the facts of experience. The logical positivists, a school of philosophy that originated in Vienna in the 1920s, took up the positivism that had been introduced by Auguste Comte in the nineteenth century and attempted to formalise it, paying close attention to the logical form of the relationship between scientific knowledge and the facts. Empiricism and positivism share the common view that scientific knowledge should in some way be derived from the facts arrived at by observation. (Chalmers 1999: 3)
A familiar crowd, though I know only Locke more closely. Hopefully I'll get around to reading the others, too, at some point not too distant in the future.
"What", it might well be suggested, "have these contrived examples got to do with science?" In response, it is not difficult to produce examples from the practice of science that illustrate the same point, namely, that what observers see, the subjective experiences that they undergo, when viewing an object or scene is not determined solely by the images on their retinas but depends also on the experience, knowledge and expectations of the observer. The point is implicit in the uncontroversial realisation that one has to learn to be a competent observer in science. Anyone who has been through the experience of having to learn to see through a microscope will need no convincing of this. When the beginner looks at a slide prepared by an instructor through a microscope it is rare that the appropriate cell structures can be discerned, even though the instructor has no difficulty discerning them when looking at the same slide through the same microscope. (Chalmers 1999: 7)
The point being that sense and understanding are distinct faculties.
How can we establish significant facts about the world through observation if we do not have some guidance as to what kind of knowledge we are seeking or what problems we are trying to solve? In order to make observations that might make a significant contribution to botany, I need to know much botany to start with. What is more, the very idea that the adequacy of our scientific knowledge should be tested against the observable facts would make no sense if, in proper science, the relevant facts must always precede the knowledge that might be supported by them. Our search for relevant facts needs to be guided by our current state of knowledge, which tells us, for example, that measuring the ozone concentration at various locations in the atmosphere yields relevant facts, whereas measuring the average hair length of the youths in Sydney does not. (Chalmers 1999: 13)
Innovation requires familiarization with tradition.
One point that should be noted is that what is needed in science is not just facts but relevant facts. The vast majority of facts that can be established by observation, such as the number of books in my office or the colour of my neighbour's car, are totally irrelevant for science, and scientists would be wasting their time collecting them. Which facts are relevant and which are not relevant to a science will be relative to the current state of development of that science. Science poses the questions, and ideally observation can provide an answer. This is part of the answer to the question of what constitutes a relevant fact for science. (Chalmers 1999: 27)
"There must be a real and living doubt, and without this all discussion is idle" (CP 5.375; in Meyers 1967: 13).
But a point that needs to be stressed here is that logical deduction alone cannot establish the truth of factual statements of the kind figuring in our examples. All that logic can offer in this connection is that if the premises are true and the argument is valid then the conclusion must be true. But whether the premises are true or not is not a question that can be settled by an appeal to logic. An argument can be a perfectly valid deduction even if it involves a false premise. (Chalmers 1999: 43)
"[...] logic does not consider how an object or idea may be presented but only how it may be represented; eyesight, that is to say, and inspiration are both beyond the province of logic" (W 1: 163).
There is a strong sense, then, in which logic alone is not a source of new truths. The truth of the factual statements that constitute the premises of arguments cannot be established by appeal to logic. Logic can simply reveal what follows from, or what in a sense is already contained in, the statements we already have to hand. Against this limitation we have the great strength of logic, namely, its truth-preserving character. If we can be sure our premises are true then we can be equally sure that everything we logically derive from them will also be true. (Chalmers 1999: 43)
"The terms of every proposition are presupposed to be comprehended; therefore no proposition can give us a new conception, and Wisdom is not learnt from books" (W 1: 5).
Let us consider some low-level scientific laws such as "metals expand when heated" or "acids turn litmus red". These are general statements. They are examples of what philosophers refer to as universal statements. They refer to all events of a particular kind, all instances of metals being heated and all instances of litmus being immersed in acid. Scientific knowledge invariably involves general statements of this kind. The situation is quite otherwise when it comes to the observation statements that constitute the facts that provide the evidence for general scientific laws. Those observable facts or experimental results are specific claims about a state of affairs that obtains at a particular time. They are what philosophers call singular statements. They include statements such as "the length of the copper bar increased when immersed in the beaker of hydrochloric acid". (Chalmers 1999: 44)
Not a bad tidbit about the language of science.
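To make the contrast concrete, here is my own gloss in standard predicate-logic notation (the predicate names are mine, not Chalmers'):

\[
\forall x\,\bigl(\mathrm{Metal}(x) \wedge \mathrm{Heated}(x) \rightarrow \mathrm{Expands}(x)\bigr)
\qquad \text{versus} \qquad
\mathrm{Expands}(a)
\]

The universal statement on the left quantifies over every instance of a metal being heated, past, present and future; the singular statement on the right merely records that one particular bar, a, expanded on one particular occasion.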
There are many instances in which the demand for a large number of instances seems inappropriate. To illustrate this, consider the strong public reaction against nuclear warfare that was provoked by the dropping of the first atomic bomb on Hiroshima towards the end of the Second World War. That reaction was based on the realisation of the extent to which atomic bombs cause widespread destruction and human suffering. And yet this widespread, and surely reasonable, belief was based on just one dramatic observation. In similar vein, it would be a very stubborn investigator who insisted on putting his hand in the fire many times before concluding that fire burns. (Chalmers 1999: 46)
Week 8 of hand-in-fire experiments continues, dataset still incomplete.
If we take contemporary scientific knowledge at anything like face value, then it has to be admitted that much of that knowledge refers to the unobservable. It refers to such things as protons and electrons, genes and DNA molecules and so on. How can such knowledge be accommodated into the inductivist position? Insofar as inductive reasoning involves some kind of generalisation from observable facts, it would appear that such reasoning is not capable of yielding knowledge of the unobservable. Any generalisation from facts about the observable world can yield nothing other than generalisations about the observable world. Consequently, scientific knowledge of the unobservable world can never be established by the kind of inductive reasoning we have discussed. (Chalmers 1999: 49)
Eerily reminiscent of the ancient Greek atomistic theories discussed in the course on the history of philosophy: atoms, after all, cannot be sensed.
More seriously, we have been unable to give a precise specification of induction in a way that will help distinguish a justifiable generalisation from the facts from a hasty or rash one, a formidable task given nature's capacity to surprise, epitomised in the discovery that supercooled liquids can flow uphill. (Chalmers 1999: 58)
Likewise "We should be astonished and should seek for an explanation if, for instance, we saw the flame turn about and point "down."" (Koyré 1943: 407).
Popper himself tells the story of how he became disenchanted with the idea that science is special because it can be derived from the facts, the more facts the better. He became suspicious of the way in which he saw Freudians and Marxists supporting their theories by interpreting a wide range of instances, of human behaviour or historical change respectively, in terms of their theory and claiming them to be supported on this account. It seemed to Popper that these theories could never go wrong because they were sufficiently flexible to accommodate any instances of human behaviour or historical change as compatible with their theory. Consequently, although giving the appearance of being powerful theories confirmed by a wide range of facts, they could in fact explain nothing because they could rule out nothing. (Chalmers 1999: 59)
Probably why Psychoanalysis and Marxism are today not considered "sciences" but interpretive frameworks - you can interpret anything through these respective prisms.
Once proposed, speculative theories are to be rigorously and ruthlessly tested by observation and experiment. Theories that fail to stand up to observational and experimental tests must be eliminated and replaced by further speculative conjectures. Science progresses by trial and error, by conjectures and refutations. Only the fittest theories survive. (Chalmers 1999: 60)
If only it were like this in the humanities, where half-baked theories linger on long past their expiration.
The sophisticated falsificationist account of science, with its emphasis on the growth of science, switches the focus of attention from the merits of a single theory to the relative merits of competing theories. It gives a dynamic picture of science rather than the static account of the most naive falsificationists. Instead of asking of a theory, "Is it falsifiable?", "How falsifiable is it?" and "Has it been falsified?", it becomes more appropriate to ask, "Is this newly proposed theory a viable replacement for the one it challenges?" In general, a newly proposed theory will be acceptable as worthy of the consideration of scientists if it is more falsifiable than its rival, and especially if it predicts a new kind of phenomenon not touched on by its rival. (Chalmers 1999: 74)
The language of static/dynamic quite relevant for my current interests.
Having carefully observed the moon through his newly invented telescope, Galileo was able to report that the moon was not a smooth sphere but that its surface abounded in mountains and craters. His Aristotelian adversary had to admit that things did appear that way when he repeated the observations for himself. But the observations threatened a notion fundamental for many Aristotelians, namely that all celestial bodies are perfect spheres. Galileo's rival defended his theory in the face of the apparent falsification in a way that was blatantly ad hoc. He suggested that there was an invisible substance on the moon filling the craters and covering the mountains in such a way that the moon's shape was perfectly spherical. When Galileo inquired how the presence of the invisible substance might be detected, the reply was that there was no way in which it could be detected. (Chalmers 1999: 76)
Amusing indeed. Epoxy resin moon.
The falsification of cautious conjectures is informative because it establishes that what was regarded as unproblematically true is in fact false. Russell's demonstration that naive set theory, which was based on what appear to be almost self-evident propositions, is inconsistent is an example of an informative falsification of a conjecture apparently free from risk. By contrast, little is learnt from the falsification of a bold conjecture or the confirmation of a cautious conjecture. If a bold conjecture is falsified, then all that is learnt is that yet another crazy idea has been proved wrong. The falsification of Kepler's speculation that the spacing of the planetary orbits could be explained by reference to Plato's five regular solids does not mark one of the significant landmarks in the progress of physics. (Chalmers 1999: 80)
Hence why a suggestion I read in an awful book about how to conduct social science experiments was so off-putting: take a well-known, proven experiment and add a random extra factor that makes it just different enough to sound like a new discovery.
A confirmation will confer some high degree of merit on a theory if that confirmation resulted from the testing of a novel prediction. That is, a confirmation will be significant if it is established that it is unlikely to eventuate in the light of the background knowledge of the time. Confirmations that are foregone conclusions are insignificant. If today I confirm Newton's theory by dropping a stone to the ground, I contribute nothing of value to science. (Chalmers 1999: 84)
Why putting together two humanistic theories that are already taken to be commensurate is not a great leap forward.
The main attraction of the Copernican theory lay in the neat way it explained a number of features of planetary motion, which could be explained in the rival Ptolemaic theory only in an unattractive, artificial way. The features are the retrograde motion of the planets and the fact that, unlike the other planets, Mercury and Venus always remain in the proximity of the sun. (Chalmers 1999: 96)
Mercury is 57.91 million km, Venus 108.2 million km, and Earth 149.6 million km from the sun.
Definitions must be rejected as a fundamental way of establishing meanings because concepts can only be defined in terms of other concepts, the meanings of which are given. If the meanings of these latter concepts are themselves established by definition, it is clear that an infinite regress will result unless the meanings of some concepts are known by other means. A dictionary is useless unless we already know the meanings of many words. Newton could not define mass or force in terms of previously available concepts. It was necessary for him to transcend the limits of the old conceptual framework by developing a new one. A second alternative is the suggestion that concepts acquire their meaning by way of ostensive definition. (Chalmers 1999: 105)
This line of argumentation is becoming more and more familiar. No new knowledge can be learned from books because understanding them requires previous familiarity with other books.
A case could be made to the effect that the typical history of a concept, whether it be "chemical element", "atom", "the unconscious" or whatever, involves the initial emergence of the concept as a vague idea, followed by its gradual clarification as the theory in which it plays a part takes a more precise and coherent form. (Chalmers 1999: 106)
The growth of signs.
It is the lack of disagreement over fundamentals that distinguishes mature, normal science from the relatively disorganised activity of immature pre-science. According to Kuhn, the latter is characterised by total disagreement and constant debate over fundamentals, so much so that [|] it is impossible to get down to detailed, esoteric work. There will be almost as many theories as there are workers in the field and each theoretician will be obliged to start afresh and justify his or her own particular approach. (Chalmers 1999: 110-111)
A situation all too familiar.
The seriousness of a crisis deepens when a rival paradigm makes its appearance. According to Kuhn (1970a, p. 91), "the new paradigm, or a sufficient hint to permit later articulation, emerges all at once, sometimes in the middle of the night, in the mind of a man deeply immersed in crisis". The new paradigm will be very different from and incompatible with the old one. The radical differences will be of a variety of kinds. (Chalmers 1999: 114)
How Jakobson's scheme of linguistic functions should be schematized visually literally came to me in a dream after reading him extensively.
The way scientists view a particular aspect of the world will be guided by a paradigm in which they are working. Kuhn argues that there is a sense in which proponents of rival paradigms are "living in different worlds". He cites as evidence the fact that changes in the heavens were first noted, recorded and discussed by Western astronomers after the proposal of a Copernican theory. Before that, the Aristotelian paradigm had dictated that there could be no change in the super-lunar region and, accordingly, no change was observed. Those changes that were noticed were explained away as disturbances in the upper atmosphere. (Chalmers 1999: 115)
Human Umwelten. Maailmamõistmine ('world-understanding').
If the revolution is to be successful, this shift will spread so as to include the majority of the relevant scientific community, leaving only a few dissenters. These will be excluded from the new scientific community and will perhaps take refuge in a philosophy department. In any case, they will eventually die. (Chalmers 1999: 117)
K.
Lakatos's response was to suggest that not all parts of a science are on a par. Some laws or principles are more basic than others. Indeed, some are so fundamental as to come close to being the defining feature of a science. As such, they are not to be blamed for any apparent failure. Rather, the blame is to be placed on the less fundamental components. A science can then be seen as the programmatic development of the implications of the fundamental principles. Scientists can seek to solve problems by modifying the more peripheral assumptions as they see fit. Insofar as their efforts are successful they will be contributing to the development of the same research program however different their attempts to tinker with the peripheral assumptions might be. (Chalmers 1999: 131)
There is internal differentiation within a theory or paradigm.
Lakatos referred to the fundamental principles as the hard core of a research program. The hard core is, more than anything else, the defining characteristic of a program. It takes the form of some very general hypotheses that form the basis from which the program is to develop. (Chalmers 1999: 131)
E.g. the concept of sign in semiotics.
We have seen that Kuhn (1970, p. 94) was unable to give a clear answer to the question of the sense in which a paradigm can be said to be superior to the one it replaces, and [|] so was left with no option but to appeal to the authority of the scientific community. Later paradigms are superior to their predecessors because the scientific community judges them to be so, and "there is no standard higher than the assent of the relevant community". (Chalmers 1999: 137-138)
The consensus within the community of inquirers.
But once this move is taken, it is clear that there can be no on-the-spot advice forthcoming from Lakatos's methodology along the lines that scientists must give up a research program, or prefer a particular research program to its rival. It is not irrational or necessarily misguided for a scientist to remain working on a degenerating program if he or she thinks there are possible ways to bring it to life again. It is only in the long term (that is, from a historical perspective) that Lakatos's methodology can be used to meaningfully compare research programs. (Chalmers 1999: 144)
Just because "phatic communion" had a misguided beginning and has only became more ambiguous in successive conceptual iterations does not necessarily mean that it cannot be put in order and operationalized.
People and societies cannot in general be treated in this way without destroying what it is that is being investigated. A great deal of complexity is necessary for living systems to function as such, so even biology can be expected to exhibit some important differences from physics. In social sciences the knowledge that is produced itself forms an important component of the systems being studied. So, for example, economic theories can affect the way in which individuals operate in the market place, so that a change in theory can bring about a change in the economic system being studied. This is a complication that does not apply in the physical sciences. (Chalmers 1999: 147)
"The history of a system is in turn a system" (Jakobson & Tynjanov 1981[1928d]: 4); "Scientific investigation is not only an instrument for the study of culture but is also part of its object" (Lotman et al. 2013[1973]: 77).
The following passage from Galileo's Dialogue Concerning the Two Chief World Systems (1967), cited by Feyerabend (1975, pp. 100-101), indicates that Galileo thought otherwise.

You wonder that there are so few followers of the Pythagorean opinion [that the earth moves] while I am astonished that there have been any up to this day who have embraced and followed it. Nor can I ever sufficiently admire the outstanding acumen of those who have taken hold of this opinion and accepted it as true: they have, through sheer force of intellect done such violence to their own senses as to prefer what reason told them over that which sensible experience plainly showed them to the contrary. For the arguments against the whirling of the earth we have already examined are very plausible, as we have seen: and the fact that the Ptolemaics and the Aristotelians and all their disciples took them to be conclusive is indeed a strong argument of their effectiveness. But the experiences which overtly contradict the annual movement are indeed so much greater in their apparent force that, I repeat, there is no limit to my astonishment when I reflect that Aristarchus and Copernicus were able to make reason so conquer sense that, in defiance of the latter, the former became mistress of their belief.

Far from accepting the facts considered to be borne out by [|] the senses by his contemporaries, it was necessary for Galileo (1967, p. 328) to conquer sense by reason and even to replace the senses by "a superior and better sense", namely the telescope. (Chalmers 1999: 151-152)
I am yet again baffled by what else can be called Pythagorean.
Kuhn avoided Feyerabend's anarchistic conclusions essentially by appealing to social consensus to restore law and order. Feyerabend (1970) rejected Kuhn's appeal to the social consensus of the scientific community, partly because he did not think Kuhn distinguished between legitimate and illegitimate ways (for example by killing all opponents) of achieving consensus, and also because he did not think the appeal to consensus was capable of distinguishing between science and other activities such as theology and organised crime. (Chalmers 1999: 155)
Paul Feyerabend looks like an interesting figure in his own right.
One of Galileo's Aristotelian opponents (cited in Galileo, 1967, p. 248) referred to the idea that "the senses and experience should be our guide in philosophising" as "the criterion of science itself". A number of commentators on the Aristotelian tradition have noted that it was a key principle within that tradition that knowledge claims should be compatible with the evidence of the senses when they are used with sufficient care under suitable conditions. Ludovico Geymonat (1965, p. 45), a biographer of Galileo, refers to the belief "shared by most scholars at the time [of Galileo's innovations]" that "only direct vision has the power to grasp actual reality". [...] A teleological defence of this fundamental standard was common. The function of the senses was understood to be to provide us with information about the world. Therefore, although the senses can mislead in abnormal circumstances, for instance in a mist or when the observer is sick or drunk, it makes no sense to assume that the senses can be systematically misleading when they are fulfilling the task for which they are intended. (Chalmers 1999: 163)
That's the thing though - no-one intended our senses. They were not created, they evolved.
These were the theories at stake, and the appreciation of Faraday's motor effect was not "theory dependent" in the sense that an appreciation of it depended on the acceptance of or familiarity with some version of one of the rival theories. Within electromagnetism at the time Faraday's motor constituted an experimentally established theory-neutral effect which all electromagnetic theories were obliged to take account of. (Chalmers 1999: 196)
An example of an experiment with a life of its own, meaning independence from theory.
According to Popper, astrology is not a science because it is unfalsifiable. Kuhn points out that this is inadequate because astrology was (and is) falsifiable. In the sixteenth and seventeenth centuries, when astrology was "respectable", astrologers did make testable predictions, many of which turned out to be false. Scientific theories make predictions that turn out to be false too. The difference, according to Kuhn, is that science is in a position to learn constructively from the "falsifications", whereas astrology was not. For Kuhn, there exists in normal science a puzzle-solving tradition that astrology lacked. There is more to science than the falsification of theories. There is also the way in which falsifications are constructively overcome. It is ironic, from this point of view, that Popper, who at times characterised his own approach with the slogan "we learn from our mistakes", failed precisely because his negative, falsificationist account did not capture an adequate, positive account of how science learns from mistakes (falsifications). (Chalmers 1999: 203)
Falsifying a theory is not much use if nothing is learned from it; one can go on falsifying astrology for ever.
The notion of a law originates in the social sphere where it makes straightforward sense. Society's laws are obeyed or not obeyed by individuals who can comprehend the laws and the consequences of violating them. But once laws are understood in this natural way, how can it be said that material systems in nature obey laws? For they can hardly be said to be in a position to comprehend the laws they are meant to obey, and, in any case, a fundamental law as it applies in science is supposed to be exceptionless, so there is no correlate to an individual's violating a social law and taking the consequences. (Chalmers 1999: 213)
Ah, the distinction between descriptive and prescriptive rules. Society's rules are prescribed, whereas laws of science describe how the world behaves.
The majority of philosophers seem reluctant to accept an ontology which includes dispositions or powers as primitive. I do not understand their reluctance. Perhaps the reasons are in part historical. Powers were given a bad name by the mystical and obscure way they were employed in the magical tradition in the Renaissance, and they are alleged to have been exploited by the Aristotelians in a cavalier way under the guise of forms. (Chalmers 1999: 219)
It is becoming more and more clear to me that I am much more interested in the mystical and obscure than the scientific.
Global anti-realism, as I will call it, raises the question of how language of any kind, including scientific language, can engage with, or hook onto, the world. Its defenders observe that we have no way of coming face to face with reality to read off facts about it, by way of perception or in any other way. We can view the world only from our humanly generated perspectives and describe it in the language of our theories. We are forever trapped within language and cannot break out of it to describe reality "directly" in a way that is independent of our theories. Global [|] anti-realism denies we have access to reality in any way, and not just within science. (Chalmers 1999: 227-228)
The prison of language cannot be broken out of; there is nowhere to go.
The logician Alfred Tarski demonstrated how, for a reasonably simple language system, paradoxes can be avoided. The crucial step was his insistence that, when one is talking of the truth or falsity of the sentences in some language, one must carefully distinguish sentences in the language system that is being talked about, the "object language", from sentences in the language system in which talk about the object language is carried out, the "metalanguage". Referring to the paradox involving the card, if we adopt Tarski's recommendation then we must decide whether each sentence on the card is in the language being talked about or in the language in which the talking is being done. If one follows the rule that each of the sentences must be in either the object or the metalanguage but not in both, then neither sentence can both refer to the other and be referred to by the other, and no paradoxes arise. (Chalmers 1999: 229)
Quite understandably explained. I have never cared to look up what Tarski's metalanguage (whence Jakobson admitted deriving his "metalingual" function, as opposed to the designation he used beforehand, "autonymous") was about but am glad to have this explanation.
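A minimal sketch of Tarski's device, as I understand it: the truth predicate for an object language L lives in the metalanguage, which asserts, for every sentence S of L,

\[
\mathrm{True}_{L}(\ulcorner S \urcorner) \;\leftrightarrow\; S
\]

so that, for example, "snow is white" is true-in-L if and only if snow is white. Since True_L is not itself a predicate of L, no sentence of L can ascribe truth or falsity to sentences of its own level, which is exactly what blocks the mutual reference needed for the card paradox.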
In the closing decades of the nineteenth century Duhem, along with other notable anti-realists such as Ernst Mach and Wilhelm Ostwald, refused to take the atomic theory literally. It was their view that unobservable atoms either have no place in science or, if they do, should be treated merely as useful fictions. The vindication of the atomic theory to the satisfaction of the vast majority of scientists (including Mach and Ostwald, but not Duhem) by 1910 is taken by realists to have demonstrated the falsity, and the sterility, of anti-realism. (Chalmers 1999: 237)
Oops the useful fictions proved to be real.