Theory and Reality: An Introduction to the Philosophy of Science

"The scientific method is the most reliable way of discovering the truth." That's how I responded when a friend recently challenged me to state my core beliefs. As I ranted about Popper on "falsifiability" and our conversation descended into the depths of definitional disputes, I realized that I actually didn't have a clear definition for what "the scientific method" actually was. That was a problem. Thanks to Godfrey-Smith, it was a problem with a solution... at least sort of.

I read "Theory and Reality" to gain a more nuanced understanding of the philosophy and sociology of science. In particular, I wanted to get an answer to "What level of confidence should we have in our current theories, given the dramatic history of change in science?" I finished the book without a clear answer, but with a much better understanding of why this is such a rich and difficult question.

The first half of the book traces the history of big ideas and controversies within the philosophy of science. The presentation of the giants of the field - Popper, Kuhn, Lakatos, Feyerabend, and Laudan - is particularly good because Godfrey-Smith contextualizes how they all knew each other and how their ideas were related. He concludes each chapter with a thoughtful list of additional reading too - I've already added a bunch to my list. The second half explores more recent developments and is less clear. This may be because it's too early to have a clear sense of what the major stories of our time are. His own theory, presented in the final chapter, is a muddled mess. Still, this book significantly revised some of my fundamental beliefs, and for that alone it was well worth the read.

Godfrey-Smith devotes an entire early chapter to Karl Popper because of his fame ("hardly ever does a philosopher succeed in inspiring scientists in the way Popper has") and devastatingly dismantles Popper's philosophical framework brick-by-brick. I had been deeply attracted to Popper's "weaponization" of falsifiability as a way to distinguish science from pseudo-science. But Godfrey-Smith raises major issues with Popper's approach. The first is that Popper refuses to let us say that we are increasing our confidence in a particular theory. The second is that falsifiability is built on a foundation of sand - any experiment relies on a vast web of assumptions and a "falsified" theory can always claim that one of the other assumptions is wrong. Popper also claims that any probabilistic model is unscientific because it is impossible to falsify probabilities - this would classify vast swathes of modern science as pseudo-science. So Popper seems to fail to describe how science actually works in practice. This shocked me and made me question my previous faith in Sir Karl.

But Godfrey-Smith doesn't completely dismiss Popper. He uses Popper's ideas to highlight that the philosophy of science still has no good way to "confirm" theories, rendering all scientific knowledge tentative and subject to revision. And he does give Karl credit:

What is Popper's single most important and enduring contribution to philosophy of science? I'd say it is his use of the idea of "riskiness" to describe the kind of contact that scientific theories have with observation.

He also ties Popper back to earlier empiricist thinking on the problem of scientific induction. I was surprised to read that:

Hume asked, "What reason do we have for thinking that the future will resemble the past?" Induction is psychologically natural to us. Despite this, Hume thought it had no rational basis. Hume's inductive skepticism has haunted empiricism ever since.

So we don't have any strictly logical basis for believing that induction - the foundation of almost all scientific fields - actually works?! This was the first time I had ever even thought of this - seems like a major problem.

From Popper and empiricism, Godfrey-Smith proceeds to Kuhn and his "Structure of Scientific Revolutions." I had read Kuhn's book before (my review here) and like Lakatos, I was concerned that Kuhn was promoting "mob rule" and relativism in science. I was particularly concerned about claims that "people in different paradigms will use different standards of evidence and argument." Godfrey-Smith does a respectable job of defending Kuhn from these accusations, but his argument was a bit fuzzy because Kuhn himself didn't have great responses to some of these questions. The closest we get is:

So if we want to compare scientific procedures of investigation with nonscientific ones, it is clear that Kuhn thought science was superior. He was not a relativist about this issue, and perhaps that is the most important issue.

But since Kuhn doesn't have a clear way to distinguish science from "non-science", I remain unconvinced. This feels like a major hole in Kuhn's approach, but I'm sure my understanding of his theories is incomplete. Later on, Godfrey-Smith dismisses much of the "Science Studies" and "Sociology of Science" fields which he thinks actually have slipped down the slippery slope of relativism. As he puts it, "There seems to be no place in the picture for the responsiveness of scientific belief to the real structure of the world being investigated." Savage.

I did appreciate Godfrey-Smith's discussion about how open science should be to evaluating the fundamentals. He explains that a counter-intuitive benefit of Kuhn's "paradigm" approach is that certain areas are off-limits for questioning. Unlike Popper, who thinks it is a scientist's duty to question everything all the time, Kuhn correctly says that this would result in no research ever getting done. By focusing the science community's efforts on a defined set of problems, the paradigm approach makes the research enterprise more efficient.

Godfrey-Smith devotes much of his discussion of the sociology of science to the efficiency of overall scientific progress and the incentives of individual scientists. This section was fascinating to me because these issues directly impact the integrity of the overall scientific endeavor. He draws very interesting parallels between the organization of science and Smith's "Invisible Hand" that organizes the economy. And while economics has its market failures, scientific research has its own pathologies. Sadly, Godfrey-Smith only briefly discusses how the incentive structure of modern science can encourage fraud (and the mania to publish). He briefly touches on the relatively recent development of financial rewards for the commercialization of scientific research, but fails to explore this new and important issue.

Instead, Godfrey-Smith mostly focuses on the question of how individual scientists choose which problems to tackle - largely a matter of the rate of progress in the field and how many scientists are already in the community. Knowing many young scientists myself, I found this section to be missing a critical component of the decision-making process. There is no discussion of the role of passion, intellectual curiosity, or any other intrinsic motivation. Many of the researchers I know would appreciate recognition for their work, but they chose their field because they loved the thrill of hunting for new stars or spending time with animals or trying to cure a disease that killed a loved one. These factors are completely ignored in this book.

Overall though, this book gave me lots to think about and some fruitful directions for future exploration. I particularly liked Godfrey-Smith's idea that "The power of science is seen in the cumulative and coordinated nature of scientific work." But as he stresses, the cumulative nature of the work means that it's critical to prevent fraud from destabilizing everything that builds upon it. Godfrey-Smith says that one of the most special parts of science is its ability to "balance between criticism and trust." Are we sure that things are still in balance? As I read Godfrey-Smith's brief passages on scientific fraud, I was reminded of Charlie Munger's exhortation that the best way to make people behave morally is to make systems that are hard to cheat. Modern science strikes me as particularly easy to cheat - especially in the short term. There are very few incentives (or resources) to reproduce even a tiny fraction of the experiments published each year, and there are significant rewards for claiming a novel discovery. I hope to read more about these issues, and this book has given me a good initial framework for thinking about the modern scientific endeavor.

My highlights below.


1 - Introduction

The Science Wars eventually cooled down, but now, as I write these words, it is fair to say that there is still a great deal of disagreement about even the most basic questions concerning the nature and status of scientific knowledge.

Around the seventeenth century, when modern science began its rise, the fields that we would now call science were more usually called "natural philosophy" (physics, astronomy, and other inquiries into the causes of things) or "natural history" (botany, zoology, and other descriptions of the contents of the world). Over time, the term "science" came to be used for work with closer links to observation and experiment, and the association between science and an ideal of conclusive proof receded. The current senses of the term "science" and the associated word "scientist" are products of the nineteenth century.

However we choose to use the word "science," in the end we should try to develop both

  1. a general understanding of how humans gain knowledge of the world around them and
  2. an understanding of what makes the work descended from the Scientific Revolution different from other kinds of investigation of the world.

Epistemology is the side of philosophy that is concerned with questions about knowledge, evidence, and rationality. Metaphysics, a more controversial part of philosophy, deals with general questions about the nature of reality. Philosophy of science overlaps with both of these.

When assessing general claims about science, it is a good principle to constantly ask: "Is this claim intended to be descriptive or normative, or both?"

Another famous phrase is "scientific method." Perhaps this is what most people have in mind when they imagine giving a general theory of science. The idea of describing a special method that scientists do or should follow is old. In the seventeenth century, Francis Bacon and Rene Descartes, among others, tried to give detailed specifications of how scientists should proceed. Although describing a special scientific method looks like a natural thing to try to do, during the twentieth century many philosophers and others became skeptical about the idea of giving anything like a recipe for science. Science, it was argued, is too creative and unpredictable a process for there to be a recipe that describes it - this is especially true in the case of great scientists like Newton, Darwin, and Einstein. For a long time it was common for science textbooks to have an early section describing "the scientific method," but recently textbooks seem to have become more cautious about this.

If looking for a recipe is too simplistic, and looking for a logical theory is too abstract, what might we look for instead? Here is an answer that will be gradually developed as the book goes on: we can try to describe the scientific strategy for investigating the world.

In this section I will introduce three different answers to our general questions about how science works.

The first of the three ideas is empiricism.

Some readers are probably thinking that these empiricist principles are empty platitudes. Of course experience is the source of knowledge about the world - what else could it be?
For those who suspect that basic empiricist principles are completely trivial, an interesting place to look is the history of medicine. The history of medicine has many examples of episodes where huge breakthroughs were made by people willing to make very basic empirical tests - in the face of much skepticism, condescension, and opposition from people who "knew better."

Putting the point in plainer language, here is the second of the three ideas. Mathematics and Science: What makes science different from other kinds of investigation, and especially successful, is its attempt to understand the natural world using mathematical tools.

In this book the role of mathematics will be a significant theme but not a central one. This is partly because of the history of the debates surveyed in the book, and partly because mathematical tools are not quite as essential to science as Galileo thought. Although mathematics is clearly of huge importance in the development of physics, one of the greatest achievements in all of science - Darwin's achievement in On the Origin of Species ([1859] 1964) - makes no real use of mathematics.

The third of the three families of ideas is newer. Maybe the unique features of science are only visible when we look at scientific communities. Social Structure and Science: What makes science different from other kinds of investigation, and especially successful, is its unique social structure.

So trust and cooperation are essential to science. But who can be trusted? Who is a reliable source of data? Shapin argues that when we look closely, a great deal of what went on in the Scientific Revolution had to do with working out new ways of policing, controlling, and coordinating the actions of groups of people in the activity of research. Experience is everywhere. The hard thing is working out which kinds of experience are relevant to the testing of hypotheses, and working out who can be trusted as a source of reliable and relevant reports.

In the mid-seventeenth century we also see the rise of scientific societies in London, Paris, and Florence. These societies were intended to organize the new research and break the institutional monopoly of the (often conservative) universities.

2 - Logic Plus Empiricism

Both during these classical discussions and more recently, a problem for empiricism has been a tendency to lapse into skepticism, the idea that we cannot know anything about the world.

In discussions of the history of philosophy, it is common to talk of a showdown in the seventeenth and eighteenth centuries between "the rationalists" and "the empiricists." Rationalists like Descartes and Leibniz believed that pure reasoning can be a route to knowledge that does not depend on experience. Mathematics seemed to be a compelling example of this kind of knowledge.

The Vienna Circle was established by Moritz Schlick and Otto Neurath. It was based, as you might expect, in Vienna, Austria. From the early days through to the end, a central intellectual figure was Rudolf Carnap. Carnap seems to have been the kind of person whose presence inspired awe even in other highly successful philosophers.

Logical positivism was a plea for Enlightenment values, in opposition to mysticism, romanticism, and nationalism. The positivists championed reason over the obscure, the logical over the intuitive. The logical positivists were also internationalists, and they liked the idea of a universal and precise language that everyone could use to communicate clearly.

Logical positivist ideas were imported into England by A. J. Ayer in Language, Truth, and Logic (1936), a vivid and readable book that conveys the excitement of the time.

The logical positivists who did make it to the United States were responsible for a great flowering of American philosophy in the years after World War II. These include Rudolf Carnap, Hans Reichenbach, Carl Hempel, and Herbert Feigl.

Logical positivist views about science and knowledge were based on a general theory of language; we need to start here, before moving to the views about science. This theory of language featured two main ideas, the analytic-synthetic distinction and the verifiability theory of meaning.

Although the distinction itself looks uncontroversial, it can be made to do real philosophical work. Here is one crucial piece of work the logical positivists saw for it: they claimed that all of mathematics and logic is analytic.

Earlier philosophers in the rationalist tradition had claimed that some things can be known a priori; this means known independently of experience. Logical positivism held that the only things that seem to be knowable a priori are analytic and hence empty of factual content.

I turn now to the other main idea in the logical positivist theory of language, the verifiability theory of meaning. This theory applies only to sentences that are not analytic, and it involves a specific kind of "meaning," the kind involved when someone is trying to say something about the world. Here is how the theory was often put: the meaning of a sentence consists in its method of verification. That formulation might sound strange (it always has to me). Here is a formulation that sounds more natural: knowing the meaning of a sentence is knowing how to verify it. And here is a key application of the principle: if a sentence has no possible method of verification, it has no meaning. By "verification" here, the positivists meant verification by means of observation. Observation in all these discussions is construed broadly, to include all kinds of sensory experience. And "verifiability" is not the best word for what they meant. A better word would be "testability." This is because testing is an attempt to work out whether something is true or false, and that is what the positivists had in mind. The term "verifiable" generally only applies when you are able to show that something is true. It would have been better to call the theory "the testability theory of meaning." Sometimes the logical positivists did use that phrase, but the more standard name is "verifiability theory," or just "verificationism."

The verifiability principle was used by the logical positivists as a philosophical weapon.

For logical positivism, logic is the main tool for philosophy, including philosophical discussion of science.

There is no alternative route to knowledge besides experience; when traditional philosophy has tried to find such a route, it has lapsed into meaninglessness. The interpretation of logical positivism I have just given is a standard one.

The criticism that I will focus on here is one of these, and its most famous presentation is in a paper sometimes regarded as the most important in all of twentieth-century philosophy: W. V. Quine's "Two Dogmas of Empiricism" (1953). Quine argued for a holistic theory of testing, and he used this to motivate a holistic theory of meaning as well.

In the first published paper that introduced logical positivism, Carnap, Hahn, and Neurath said: "In science there are no 'depths'; there is surface everywhere". This is a vivid expression of the empiricist aversion to a view in which the aim of theorizing is to describe hidden levels of structure.

So the logical positivists and the logical empiricists talked constantly about prediction as the goal of science. Prediction was a substitute for the more obvious-looking - but ultimately forbidden - goal of describing the real hidden structure of the world.

3 - Induction and Confirmation

In this chapter we begin looking at a very important and difficult problem, the problem of understanding how observations can confirm a scientific theory. What connection between an observation and a theory makes that observation evidence for the theory? In some ways, this has been the fundamental problem in the last hundred years of philosophy of science.

The confirmation of theories is closely connected to another classic issue in philosophy: the problem of induction. What reason do we have for expecting patterns observed in our past experience to hold also in the future? What justification do we have for using past observations as a basis for generalization about things we have not yet observed? The most famous discussions of induction were written by the eighteenth-century Scottish empiricist David Hume. Hume asked, What reason do we have for thinking that the future will resemble the past?

Induction is psychologically natural to us. Despite this, Hume thought it had no rational basis. Hume's inductive skepticism has haunted empiricism ever since. The problem of confirmation is not the same as the classical problem of induction, but it is closely related.

(And a note to mathematicians: mathematical induction is really a kind of deduction, even though it has the superficial form of induction.)

So I will recognize two main kinds of nondeductive inference, induction and explanatory inference (plus projection, which is closely linked to induction). The problem of analyzing confirmation, or the problem of analyzing evidence, includes all of these.

Explanatory inference seems much more common than induction within actual science. In fact, you might be wondering whether science contains any inductions of the simple, traditional kind. That suspicion is reasonable, but it might go too far. Science does contain inferences that look like traditional inductions, at least on the face of them.

The term hypothetico-deductivism is used in several ways by people writing about science. Sometimes it is used to describe a simple view about testing and confirmation. According to this view, hypotheses in science are confirmed when their logical consequences turn out to be true.

People do often regard a scientific hypothesis as supported when its consequences turn out to be true; this is taken to be a routine and reasonable part of science. But when we try to summarize this idea using simple logic, it seems to fall apart.

Goodman's point is that two inductive arguments can have the exact same form, but one argument can be good while the other is bad. So what makes an inductive argument a good or bad one cannot be just its form. Consequently, there can be no purely formal theory of induction and confirmation.

That concludes our initial foray into the problems of induction and confirmation. These problems are simple, but they are very resistant to solution.

4 - Popper: Conjecture and Refutation

Karl Popper is the only philosopher discussed in this book who is regarded as a hero by many scientists. Attitudes toward philosophy among scientists vary, but hardly ever does a philosopher succeed in inspiring scientists in the way Popper has.

Popper's appeal is not surprising. His view of science is centered around a couple of simple, clear, and striking ideas. His vision of the scientific enterprise is a noble and heroic one. Popper's theory of science has been criticized a great deal by philosophers over the years. I agree with many of these criticisms and don't see any way for Popper to escape their force.

The logical positivists developed their theory of science as part of a general theory of language, meaning, and knowledge. Popper was not much interested in these broader topics, at least initially; his primary aim was to understand science. As his first order of business, he wanted to understand the difference between scientific theories and nonscientific theories. In particular, he wanted to distinguish science from "pseudo-science."

For Popper, an inspiring example of genuine science was the work of Einstein. Examples of pseudo-science were Freudian psychology and Marxist views about society and history.

Popper called the problem of distinguishing science from non-science the "problem of demarcation." All of Popper's philosophy starts from his proposed solution to this problem. "Falsificationism" was the name Popper gave to his solution. Falsificationism claims that a hypothesis is scientific if and only if it has the potential to be refuted by some possible observation. To be scientific, a hypothesis has to take a risk, has to "stick its neck out." If a theory takes no risks at all, because it is compatible with every possible observation, then it is not scientific.

And crucially, for Popper it is never possible to confirm or establish a theory by showing its agreement with observations. Confirmation is a myth. The only thing an observational test can do is to show that a theory is false. So the truth of a scientific theory can never be supported by observational evidence, not even a little bit, and not even if the theory makes a huge number of predictions that all come out as expected.

Skepticism about induction and confirmation is a much more controversial position than Popper's use of falsification to solve the demarcation problem. Most philosophers of science have thought that if induction and confirmation are just myths, that is very bad news for science. Popper tried to argue that there is no reason to worry; induction is a myth, but science does not need it anyway. So inductive skepticism, for Popper, is no threat to the rationality of science.

However, almost all philosophers of science accept that we can never be 100 percent certain about factual matters, especially those discussed in science. This position, that we can never be completely certain about factual issues, is often known as fallibilism (a term due to C. S. Peirce). Most philosophers of science accept fallibilism. The harder question is whether or not we can be reasonable in increasing our confidence in the truth of a theory when it passes observational tests. Popper said no. The logical empiricists and most other philosophers of science say yes.

Popper also used the idea of falsification to propose a theory of scientific change. Popper's theory has an appealing simplicity. Science changes via a two-step cycle that repeats endlessly. Stage 1 in the cycle is conjecture - a scientist will offer a hypothesis that might describe and explain some part of the world. A good conjecture is a bold one, one that takes a lot of risks by making novel predictions. Stage 2 in the cycle is attempted refutation - the hypothesis is subjected to critical testing, in an attempt to show that it is false. Once the hypothesis is refuted, we go back to stage 1 again - a new conjecture is offered. That is followed by stage 2, and so on.

This problem is a reappearance of an issue discussed in chapter 2: holism about testing. Whenever we try to test a theory by comparing it with observations, we must make a large number of additional assumptions in order to bring the theory and the observations into "contact" with each other.

This is a problem not just for Popper's solution to the demarcation problem, but for his whole theory of science as well. Popper was well aware of this problem, and he struggled with it. He regarded the extra assumptions needed to connect theories with testing situations as scientific claims that might well be false - these are conjectures too. We can try to test these conjectures separately. But Popper conceded that logic itself can never force a scientist to give up a particular theory, in the face of surprising observations. Logically, it is always possible to blame other assumptions involved in the test. Popper thought that a good scientist would not try to do this; a good scientist is someone who wants to expose the theory itself to tests and will not try to deflect the blame.

This point about the role of decisions affects Popper's ideas about demarcation as well as his ideas about testing. Any system of hypotheses can be held onto despite apparent falsification, if people are willing to make certain decisions.

Popper's response was to accept that, logically speaking, all hypotheses of this kind are unscientific. But this seems to make a mockery of the important role of probability in science. So Popper said that a scientist can decide that if a theory claims that a particular observation is extremely improbable, the theory in practice rules out that observation. So if the observation is made, the theory is, in practice, falsified. According to Popper, it is up to scientists to work out, for their own fields, what sort of probability is so low that events of that kind are treated as prohibited. So probabilistic theories can only be construed as falsifiable in a special "in practice" sense. And we have here another role for "decisions" in Popper's philosophy of science, as opposed to the constraints of logic.

Popper refuses to say that when a theory passes tests, we have more reason to believe that the theory is true. Both the untested theory and the well-tested theory are just conjectures. But Popper did devise a special concept to use in this situation. Popper said that a theory that has survived many attempts to falsify it is "corroborated." And when we face choices like the bridge-building one, it is rational to choose corroborated theories over theories that are not corroborated.

The idea that we can gradually increase our confidence that a theory is true is an idea that Popper rejected.

What is Popper's single most important and enduring contribution to philosophy of science? I'd say it is his use of the idea of "riskiness" to describe the kind of contact that scientific theories have with observation.

Popper's formulation is valuable because it captures the idea that theories can appear to have lots of contact with observation when in fact they only have a kind of "pseudo-contact" with observation because they are exposed to no risks. This is an advance in the development of empiricist views of science. Popper's analysis of how this exposure works does not work too well, but the basic idea is good.

But if a hypothesis is handled in a way that keeps it apart from all the risks associated with observation, that is an unscientific handling of the idea.

What observations would lead scientists to give up current versions of evolutionary theory? A one-line reply that biologists sometimes give to this question is "a Precambrian rabbit."

5 - Kuhn and Normal Science

In this chapter we encounter the most famous book about science written during the twentieth century - The Structure of Scientific Revolutions, by Thomas Kuhn. Kuhn's book was first published in 1962, and its impact was enormous.

A common way of describing the importance of Kuhn's book is to say that he shattered traditional myths about science, especially empiricist myths. Kuhn showed, on this view, that actual scientific behavior has little to do with traditional philosophical theories of rationality and knowledge.

But what is a paradigm? The short answer is that a paradigm, in Kuhn's theory, is a whole way of doing science, in some particular field. It is a package of claims about the world, methods for gathering and analyzing data, and habits of scientific thought and action. In Kuhn's theory of science, the big changes in how scientists see the world - the "revolutions" that science undergoes every now and then - occur when one paradigm replaces another.

And although Kuhn's theory is the inspiration for all the talk about paradigm shifts that one hears, Kuhn only occasionally used the phrase "paradigm shift."

For Popper, science is characterized by permanent openness, a permanent and all-encompassing critical stance, even with respect to the fundamental ideas in a field. Other empiricist views will differ on the details here, but the idea of science as featuring permanent openness to criticism and testing is common to many versions of empiricism. Kuhn disagreed. He argued that it is false that science exhibits a permanent openness to the testing of fundamental ideas. Not only that, but science would be worse off if it had the kind of openness that philosophers have treasured.

For Popper, all science proceeds via a single process, the process of conjecture and refutation. There can still be episodes called "revolutions" in such a view, but revolutions are just different in degree from what goes on the rest of the time; they involve bigger conjectures and more dramatic refutations. For Kuhn, there are two distinct kinds of scientific change: change within normal science, and revolutionary science. (These are bridged by "crisis science," a period of unstable stasis.)

Even if we leave aside the details of Kuhn's claims, this strategy of argument was controversial and influential. Kuhn addressed philosophical questions about reason and evidence via an examination of history. As we saw in chapter 2, the logical empiricists made a sharp distinction between questions about the history and psychology of science, on the one hand, and questions about evidence and justification, on the other. Kuhn was deliberately mixing together things that the logical empiricists had insisted should be kept apart. One of the reasons that Kuhn was interpreted as a "destroyer" of logical empiricism was that Kuhn's work seemed to show how interesting it is to connect philosophical questions about science with questions about the history of science.

I think Kuhn had a very definite picture of how science should work and of what can cause harm to science. In fact, it is here that we find what I regard as the most fascinating feature of The Structure of Scientific Revolutions. This is the relationship between (1) Kuhn's constant emphasis on the arbitrary, personal nature of factors often influencing scientific decisions, the rigidity of scientific indoctrination of students, the "conceptual boxes" that nature gets forced into by scientists... and (2) Kuhn's suggestion that these features are actually the key to science's success - without them, there is no way for scientific research to proceed as effectively as it does. Kuhn is saying that without the factors referred to in (1), we would not have the most valuable and impressive features of science.

In general, a key part of Kuhn's theory is the principle one paradigm per field per time.

A paradigm's role is to organize scientific work; the paradigm coordinates the work of individuals into an efficient collective enterprise. A key feature that distinguishes normal science from other kinds of science for Kuhn is the absence of debate about fundamentals.

Any "closing off" of debate is bad news according to Popper. Popper criticized Kuhn explicitly on this point; Popper said that although "normal science" of Kuhn's kind does occur, it is a bad thing that it does.

Some might find this militaristic analogy unpleasant, but I think it captures a lot of what Kuhn says. Kuhn's story is guided by his claim that all paradigms constantly encounter anomalies. For a Popperian view, or for other simpler forms of empiricism, these anomalies should count as "refutations" of the theory. But Kuhn thinks that science does not treat these constantly arising anomalies as refutations, and also that it should not. If scientists dropped their paradigms every time a problem arose, they would never get anything done. Much of the secret of science, for Kuhn, is the remarkable balance it manages to strike between being too resistant to change in basic ideas, and not being resistant enough.

The idea that a willingness to revise ideas in response to observation can go too far is unexpected from the point of view of empiricist philosophy. And Kuhn supported this claim with a mass of evidence from the history of science.

For Kuhn, a constant questioning and criticism of basic beliefs is liable to result in chaos - in the partially "random" fact-gathering and speculation that we see in pre-paradigm science. But here again, Kuhn probably goes too far. He does not take seriously the possibility that scientists could agree to work together in a coordinated way, not wasting time on constant discussion of fundamental issues, while retaining a cautious attitude toward their paradigm. Surely this is possible.

Another reason for the breakdown also relates to Kuhn. The field of Alife suffered from a kind of "premature commercialization." It was realized early on that some of the work had great potential for animation and other kinds of commercial art.

For Kuhn, science depends on the good normal scientist's keen interest in puzzle-solving for its own sake. Looking outside the paradigm too often to applications and external rewards is not good for normal science.

6 - Kuhn and Revolutions

The most famous, most striking, and most controversial parts of Kuhn's book were his discussions of scientific revolutions.

We do not find pure falsifications, rejections of one paradigm without simultaneous acceptance of a new one. Rather, the rejection of one paradigm accompanies the acceptance of another.

But taking another biological example, if the appearance of genetics as a science around 1900 was a revolution, it is very hard to find a crisis in the work on inheritance that preceded it.

The sudden appearance of problem-solving power is the spark to the revolution.

Kuhn did not argue that traditional philosophical ideas about how theories should relate to evidence are completely misguided. He made it clear in his later work that there are some core ways of assessing theories that are common to all paradigms. Theories should be predictively accurate, consistent with well-established theories in neighboring fields, able to unify disparate phenomena, and fruitful of new ideas and discoveries. These principles, along with other similar ones, "provide the shared basis for theory choice".

Science for Kuhn is a social mechanism that combines two capacities. One is the capacity for sustained, cooperative work. The other is science's capacity to partially break down and reconstitute itself from time to time.

This question connects us to one of the most famous topics in Kuhn's work, the idea that different paradigms in a field are incommensurable with each other.

There are two reasons for this - there are (roughly speaking) two aspects of the problem of incommensurability. First, people in different paradigms will not be able to fully communicate with each other; they will use key terms in different ways and in a sense will be speaking slightly different languages. Second, even when communication is possible, people in different paradigms will use different standards of evidence and argument. They will not agree on what a good theory is supposed to do.

Neither the holists nor anyone else has had much success in developing a good theory of meaning for scientific language. This is a confusing and unresolved area. However, a different kind of criticism of Kuhn is possible here. If incommensurability of meanings is real, as Kuhn says, then it should be visible in the history of science.

Scientists are often adept at "scientific bilingualism," switching from one framework to another. And they are often able to improvise ways of bridging linguistic gaps, much as traders from different cultures are able to, by improvising "pidgin" languages (Galison 1997).

Kuhn's view is that there is no general answer to the question of whether scientific theories should give causal mechanisms for phenomena; this is the kind of principle that will be present in one paradigm and absent from another.

In the latter part of the nineteenth century, a group of biologists called the "Biometricians" had formulated a mathematical law that they thought described inheritance.

Kuhn's discussion of incommensurability is the main reason why his view of science is often referred to as "relativist." Kuhn's book is often considered one of the first major steps in a tradition of work in the second half of the twentieth century that embraced relativism about science and knowledge. Kuhn himself was shocked to be interpreted this way.

If our later paradigms have more overall problem-solving power than our earlier ones, then it seems that we are entitled to regard the later ones as superior. This takes us away from relativism. Clearly Kuhn's aim was to work out an intermediate or moderate position. People will be arguing about this for a while to come.

Kuhn thought that the overall structure of modern scientific investigation gives us a uniquely efficient way of studying the world. So if we want to compare scientific procedures of investigation with nonscientific ones, it is clear that Kuhn thought science was superior. He was not a relativist about this issue, and perhaps that is the most important issue.

Like Popper and others, Kuhn seems to have been hugely influenced by the fall of the Newtonian picture of the world at the start of the twentieth century.

Many parts of Kuhn's mechanism are especially hard to apply to the history of biology, which Kuhn did not much discuss.

Kuhn's theory is nothing like this. His theory of science emphasizes the differences between science, narrowly construed, and various other kinds of empirical learning and problem-solving. Science is a form of organized behavior with a specific social structure, and science seems only to thrive in certain kinds of societies. As a consequence, science appears in this story as a rather fragile cultural achievement; subtle changes in the education, incentive structure, and political situation of scientists could result in the loss of the special mechanisms of change that Kuhn described.

First, in some ways Kuhn's view of science has an "invisible hand" structure. The Scottish political and economic theorist Adam Smith argued in the Wealth of Nations that individual selfishness in economic behavior leads to good outcomes for society as a whole.

We see something similar in Kuhn's theory of science: narrow-mindedness and dogmatism at the level of the individual lead to intellectual openness at the level of science as a whole. Anomaly and crisis produce such stresses in the normal scientist that an especially wholesale openness to novelty is found in revolutions.

The analogy with Kuhn's theory of science is striking. We have the same long periods of stability and resistance to change, punctuated by unpredictable, rapid change to fundamentals. The theory of punctuated equilibrium in biology was controversial for a time, especially because it was sometimes presented by Gould in rather radical forms (Gould 1980).

7 - Lakatos, Laudan, Feyerabend, and Frameworks

First, we will look at the views of Imre Lakatos. Lakatos's main contribution was the idea of a research program. A research program is similar to a paradigm in Kuhn's (broad) sense, but it has a key difference: we expect to find more than one research program in a scientific field at any given time. The large-scale processes of scientific change should be understood as competition between research programs.

Feyerabend is the most controversial and extreme figure contributing to the debates discussed in this book. I called him "the" wild man, even though there have been various other wild men - and wild women - in the field besides Feyerabend. But Feyerabend's voice in the debates was uniquely wild. He argued for "epistemological anarchism," a view in which rules of method and normal scientific behavior were to be replaced by a freewheeling attitude in which "anything goes."

Lakatos's reaction to Kuhn's work was one of dismay. He saw Kuhn's influence as destructive-destructive of reason and ultimately dangerous to society. For Lakatos, Kuhn had presented scientific change as a fundamentally irrational process, a matter of "mob psychology", a process where the loudest, most energetic, and most numerous voices would prevail regardless of reasons.

But Lakatos also saw the force of Kuhn's historical arguments. So his project was to rescue the rationality of science from the damage Kuhn had done.

Feyerabend swooped on this point (1975). For him it was the Achilles' heel in Lakatos's whole story. If Lakatos does not give us a rule for when a rational scientist should give up on one research program and switch to another, his account of rational theory choice is completely empty.

In an interesting book called Progress and Its Problems, Larry Laudan developed a view that is similar to Lakatos's in basic structure but which is far superior. Like Lakatos, Laudan thought that Kuhn had described science as an irrational process, as a process in which scientific decision-making is "basically a political and propagandistic affair". This reading of Kuhn (I say yet again) is inaccurate.

Laudan argued that there are two different kinds of attitudes to theories and research traditions found in science, acceptance and pursuit. Acceptance is close to belief; to accept something is to treat it as true. But pursuit is different. It involves deciding to work with an idea, and explore it, for reasons other than confidence that the idea is likely to be true. Crucially, it can be reasonable to pursue an idea that one definitely does not accept.

Laudan built the distinction between acceptance and pursuit into his account of rational decision-making in science. He was able to give some fairly sharp rules where Lakatos had not. For Laudan, it is always rational to pursue the research tradition that has the highest current rate of progress in problem-solving. But that does not mean one should accept the basic ideas of that research tradition. The acceptability of theories and ideas is measured by their present overall level of problem-solving power, not by the rate of change. We should accept (perhaps cautiously) the theories that have the highest level of problem-solving power.

Both Lakatos and Laudan were interested in the situation where a scientist is looking out over a range of research programs in a field and deciding which one to join. But here is a question that neither of them seemed to ask: does the answer depend on how many people are already working in a given research program?

Science might be better served by some kind of mechanism in which the field hedges its bets. That suggests a whole different question that might be addressed by the philosophy of science: what is the best distribution of workers across a range of research programs? There are two different ways of approaching this new question. One way is to look at individual choices. Does it make sense for me to work on research program 1 rather than research program 2, given the way people are already distributed across the two programs? Is research program 1 overcrowded? Perhaps Lakatos and Laudan thought this question was not relevant to their project because it seems to require introducing selfish goals into the picture. But we can also approach the issue another way. We can ask, Which distribution of people across rival research programs is best for science?

Feyerabend, like many key figures in this book, was born in Austria. He fought in the German infantry during World War II and was wounded. He switched from science to philosophy after the war and eventually made his way to the University of California at Berkeley, where he taught for most of his career.

So what were his notorious ideas? A two-word summary gets us started: anything goes. Feyerabend's most famous work was his 1975 book Against Method. Here he argued for "epistemological anarchism." The epistemological anarchist is opposed to all systems of rules and constraints in science. Great scientists are opportunistic and creative, willing to make use of any available technique for discovery and persuasion.

Before launching into this unruly menagerie of ideas, we need to keep in mind a warning that Feyerabend gave at the start of Against Method. He said that the reader should not interpret the arguments in the book as expressing Feyerabend's "deep convictions." Instead, they "merely show how easy it is to lead people by the nose in a rational way". The epistemological anarchist is like an "undercover agent" who uses reason in order to destabilize it. Again we are being told by an author not to trust what we are reading. It is hard to know what to make of this, but I think it is possible to sort through Feyerabend's claims and distinguish some that do represent his "deep convictions." Feyerabend's deepest conviction was that science is an aspect of human creativity. Scientific ideas and scientific change are to be assessed in those terms.

In his article for the Routledge Encyclopedia of Philosophy (1998), Michael Williams suggests that we think of Feyerabend as a late representative of an old skeptical tradition, represented by Sextus Empiricus and Montaigne, in which the skeptic "explores and counterposes all manner of competing ideas without regarding any as definitely established." This is a useful comparison, but it is only part of the story. To capture the other part, we might compare Feyerabend to Oscar Wilde, the nineteenth-century Irish playwright, novelist, and poet who was imprisoned in England for homosexual behavior. Wilde is someone who liked to express strange, paradoxical claims about knowledge and ideas ("I can believe anything so long as it's incredible").

This, I suggest, is close to Feyerabend's view; what is important in all intellectual work, including science, is the free development of creativity and imagination. Nothing should be allowed to interfere with this.

His paper "Consolations for the Specialist" (1970) shows him to be one of the most perceptive critics of Kuhn.

He saw Kuhn as encouraging the worst trends in twentieth-century science toward professionalization, narrow-mindedness, and exclusion of unorthodox ideas.

Science, for Feyerabend, has gone from being an ally of freedom to being an enemy. Scientists are turning into "human ants," entirely unable to think outside of their training.

Science, for Feyerabend, is often a matter of challenging rather than following the lessons of observation.

Are there any principles of method, measures of confirmation, or summaries of the scientific strategy that do not fail the great test of the early seventeenth century? Look at the massiveness of the rethinking that Galileo urged, and the great weight of ordinary experience telling against him. Given these, would all traditional philosophical accounts of how science works, especially empiricist accounts, have instructed us to stick with the Aristotelians rather than take a bet on Galileo? This is the Feyerabendian argument that haunts philosophy of science.

However, Feyerabend massively overextends his argument, into a principle that cannot be defended: "Hence it is advisable to let one's inclinations go against reason in any circumstances, for science may profit from it". Feyerabend claims that because some principle or rule may go wrong, we should completely ignore it. The claim is obviously crazy.

The first rule Feyerabend called the "principle of tenacity." This principle tells us to hold onto attractive theories despite initial problems and allow them a chance to develop their potential. That is a start, but if everyone followed this rule, nothing would ever change. So Feyerabend adds a second principle, the "principle of proliferation." This principle tells us to make up new theories, propose new ideas.

What is missing in Feyerabend's picture is some rule or mechanism for the rejection and elimination of ideas. Feyerabend gives a recipe that, if it was followed, would lead to the accumulation of an ever-increasing range of scientific ideas being discussed in every field. Some ideas would probably become boring and might be dropped for that reason. But aside from that, there is no way for an idea to be taken off the table. So a question immediately becomes pressing: what are we supposed to do when we have to apply one of these theories to a practical problem? What do we do when the bridge has to be built? Which ideas should we use? Not the most "creative" ones, surely! Feyerabend never gave a satisfactory answer to this question.

In the last few years, for example, the government of Thabo Mbeki in South Africa has shown an interest in radical ideas about the causation of AIDS. According to these ideas, the virus identified by mainstream science as the cause of AIDS, HIV, is regarded as either relatively unimportant or altogether harmless. In reply to the storm of criticism that resulted, Mbeki has sometimes said that he is simply interested in an open-minded questioning of theories and the exploration of diverse possibilities. Surely that is a properly scientific attitude? This reply has been rightly criticized as disingenuous. Science needs the invention of alternatives, but it also needs mechanisms for pruning the range of options and abandoning some. When the time comes to apply scientific ideas in a public health context, this selection process is of paramount importance. Then we must take from science the well-supported view that AIDS is caused by a virus transmitted through body fluids, and we must guide policy and behavior with this view.

How might we decide between a one-process view and a two-process view? Within twentieth-century philosophy, many people were persuaded by Quine's holism. These arguments were based on very general considerations and not on the history of specific episodes in science. Quine's most powerful argument is usually seen to be his claim that there is no way to mark out the distinction between changes within and changes between frameworks in a way that is scientific and does not beg the question. Kuhn, however, had no problem distinguishing normal science from revolutionary change in actual scientific cases. He saw two processes as a clear fact of history.

In any case, the introduction and criticism of two-process views of conceptual change has been a recurring motif in the last hundred years of thinking about science and knowledge.

8 - The Challenge from Sociology of Science

Science is a social enterprise. It seems, then, that one field we should turn to in order to understand this fact is sociology, the general study of human social structures.
The "sociology of science" developed in the middle of the twentieth century. For a while it had little interaction with philosophy of science. The founder of the field, and the central figure for many years, was Robert Merton.

In the 1940s Merton isolated what he called the "norms" of science - a set of basic values that govern scientific communities. These norms are universalism, communism, disinterestedness, and organized skepticism.

(Merton sometimes added humility to his list of norms, but that one is less important.)

The four norms are one part of Merton's account of science. Merton added another big idea in a famous (and wonderfully readable) paper first presented in 1957. This is Merton's account of the reward system in science. Merton claimed that the basic currency for scientific reward is recognition, especially recognition for being the first person to come up with an idea. This, Merton claimed, is the only property right recognized in science. Once an idea is published, it becomes common scientific property, according to the norm of communism. In the best case, a scientist is rewarded by having the idea named after him, as we see in such cases as Darwinism, Planck's constant, and Boyle's law.

Merton argued that the reward system of science mostly functions to encourage original thinking, which is a good thing. But the machine can also misfire, especially when the desire for reward overcomes everything else in a scientist's mind. The main "deviant" behaviors that result are fraud, plagiarism, and libel and slander.

Merton also has a poignant discussion of the fact that the kind of recognition that is the basic reward in science will only be given to a small number of scientists. There are not enough laws and constants for everyone to get one. The result is mild forms of deviancy such as the mania to publish. For pedestrian workers who cannot hope to produce a world-shaking discovery, publication becomes a substitute for real recognition.

Scientists are people who work in an unusual kind of local community. This community is characterized by high prestige, lengthy training and initiation, notoriously bad fashion choices, and expensive toys. But according to the sociologists, it is still a community in which beliefs are established and defended via local norms that are human creations, maintained by social interaction. Scientists often look down on beliefs found in other communities, but this disparaging attitude is part of the local norms of the scientific community. It is one of the rules of the game.

Although Kuhn's work is always cited by those seeking to tie science to its broader political context, Structure did not have much to say about the influence of "external" political life on science. Kuhn analyzed the "internal" politics of science - who writes the textbooks, who determines which problems have high priority. But he saw an insulation of scientific decision making from broader political influences as a strength of science. Despite his status as a hero, Kuhn did not like the more radical sociology of science that followed him.

So, despite some differences within the field, it is fair to say that the strong program (the Edinburgh-based "strong program in the sociology of scientific knowledge," associated with Barry Barnes and David Bloor) is an expression of a relativist position about belief and justification.

A famous problem for relativists is the application of relativism to itself. The problem does have various solutions, but it can definitely lead to tangles. Unfortunately, that is what happened in sociology of science.

This section will look at the two most famous works in recent sociology of science.
The first is a piece of sociologically informed history, rather than pure sociology: Steven Shapin and Simon Schaffer's Leviathan and the Air-Pump (1985; I will abbreviate the book as Leviathan). This book does not advocate the strong program, but it is often seen as a sophisticated development of those ideas. The book is so widely respected, in fact, that various different camps tend to claim it as their own. The second work is more controversial; it was important in a shift that took place in sociology of science: Bruno Latour and Steve Woolgar's Laboratory Life (1979). This book appeared before Leviathan; it is famous as a pioneering work in its style.

Boyle and his allies developed a new picture of what should be the subject of organized investigation and dispute, and how these disputes should be settled. The Royal Society of London, founded in 1660 by Boyle's group, became the institutional embodiment of the new approach.

Latour saw this processing of claims in the laboratory as aimed at building structures of "support" around them, so that they would eventually be taken as facts. A key step in the process is hiding the human work involved in turning something into a fact; to turn something into a fact is to make it look like it is not a human product but is given directly by nature.

In Latour's view, when we explain why one side succeeded and another failed in a scientific controversy, we should never give the explanation in terms of nature itself. Both sides will be claiming that they are the ones in tune with the facts. But when one side wins, that side's version of "the facts" becomes immune to challenge. Latour describes this final step as a process in which facts are created, or constructed, by scientific work.

In both its radical work and its more cautious work, sociology of science in the latter part of the twentieth century tended to suggest an unusual picture of science. This is a picture in which science is controlled entirely by human collective choices and social interests. What makes science run is negotiation, conflict resolution, hierarchies, power inequalities.... There seems to be no place in the picture for the responsiveness of scientific belief to the real structure of the world being investigated.

9 - Feminism and Science Studies

Many thought that by showing the connections between scientific institutions and political power, it would become clear that "science is political," rather than being an institution outside of politics that enjoys a special authority derived from this political neutrality. Revealing the political embedding of science would also have relevance to questions about education, medicine, and a variety of other crucial areas of social policy. The most important manifestation of this new attitude is found in the development of feminist critiques of science and feminist philosophies of science.

This shift in thinking within primatology coincided, at least roughly, with an influx of women into the field. Primatology is, in fact, one of the scientific fields in which the presence of women is unusually strong. What role did the presence of women have in changing opinions within the field? According to Hrdy (and according to others I have spoken to), the idea that this increasing representation of women had a significant role in shifting people's views about female primate behavior is fairly routinely accepted within primatology. Hrdy adds that this view seems to be accepted more in the United States than in Britain. Hrdy herself is rather cautious about this issue, but she suggests that women researchers, like herself, did tend to empathize with female primates and watched the details of their behavior more closely than their male colleagues had.

Most ambitiously, some feminist epistemologists have argued that even our fundamental concepts of reason, evidence, and truth are covertly sexist.

Standpoint theory holds that there are some facts that are only visible from a special point of view, the point of view of people who have been oppressed or "marginalized" by society. Those at the margins, or the bottom of the heap, will be able to criticize the basics - both in scientific fields and in political discussion - in a way that others cannot. Science will benefit from taking more seriously the ideas developed by people with this special point of view. This is not a relativist position because the marginalized are seen as really having better access to crucial facts than other people have.

Longino calls this revised view "contextual empiricism." This is a form of empiricism that emphasizes the role of social interaction. Longino argues that in order to be able to distinguish rationality from irrationality we should take the social group as our basic unit. Science is rational to the extent that it chooses theories from a diverse pool of options reflecting different points of view, and makes its choice via a critical dialogue that reaches consensus without coercion. Diversity in the ideas in the pool is facilitated by diversity in the backgrounds of those participating in the discussion. Epistemology becomes a field that tries to distinguish good community-level procedures from bad ones. If this is the right way to incorporate feminist ideas into epistemology, it is a way that follows a fairly old tradition (as Longino would not deny). Paul Feyerabend, as we saw in chapter 7, argued for the importance of maintaining diversity in scientific communities. And as Elisabeth Lloyd argues, Feyerabend was extending and radicalizing a line of argument from John Stuart Mill (Lloyd 1997). Diversity, for Mill, provides the raw materials for social and intellectual progress, via a vigorous "marketplace of ideas."

The role of gender in the mix is a separate question, as writers like Longino accept. Is it really true that men and women in modern Western societies have different perspectives of a kind that is relevant to science? Feminists accept that other differences, especially class differences and ethnic differences, may have as much of an effect as do gender differences, or even more than that. But many feminists expect there to be some definite "patterning" in the great soup of intellectual diversity that is due to gender differences.

It is a much harder question whether or not the experience and viewpoint of women is systematically different from that of men in a way that is likely to matter to scientific disputes. There is a risk of lapsing into simplistic generalizations here.

One of the main themes in this chapter and the previous one has been the constant expansion of the range of fields seeking to contribute to a general understanding of science.

The resulting clash became known as the "Science Wars." Science Studies, and other work covered in this chapter, became a key battle ground. Some of the attacks on this work came from the side of conservatism in political and social thought. Advocates of "traditional" education, both in schools and in universities, worried that transmission of the treasures and values of Western civilization was being undermined by radical leftist faculty members in universities and soft-minded administrators in schools. The humanities had gone to hell, and now they were trying to wreck science as well, via endless relativist bleating that science is "just another approach to knowledge with no special status."

In 1994 an American physicist, Alan Sokal, submitted a paper to a literary-political journal called Social Text, which was doing a special issue on science. The paper was a parody of radical work in Science Studies; it used the jargon of postmodernism to discuss progressive political possibilities implicit in recent mathematical physics. The title of the paper gives a sense of the style: "Transgressing the Boundaries: Toward a Transformative Hermeneutics of Quantum Gravity." The argument of the paper was completely ridiculous and often quite funny. The aim was to see if the paper would be accepted and printed by the journal; Sokal believed that this would show the field had lost all intellectual standards and would print anything that used the right buzzwords and expressed the appropriate political sentiments. Social Text published the paper (Sokal 1996b), and Sokal revealed his hoax in the journal Lingua Franca (1996a), an irreverent journal of academic life (sadly, defunct, at least for now). The uproar reverberated across the academic world and also made the newspapers.

Science Studies is rather hostile toward the idea of explaining patterns in scientific change in terms of relations between scientific theories and the structure of the world.

10 - Naturalistic Philosophy in Theory and Practice

Naturalism is often summarized by saying that "philosophy should be continuous with science." This slogan sounds nice, but it is hard to work out what it really means.

The birth of modern naturalism is often said to be the publication of W. V. Quine's paper "Epistemology Naturalized" (1969). Certainly Quine's work is very important here, but we should not think of modern naturalism as coming
entirely out of Quine. The American philosopher John Dewey is usually thought of as a pragmatist, but during the later part of his career (from roughly 1925 onward) his philosophy was a form of naturalism.

In this section I will focus on a debate that developed in the 1960s and continues to the present. The debate concerns the role of observation in science, and it is often called the debate about the "theory-ladenness of observation." Put most simply, the debate has to do with whether observational evidence can be considered an unbiased or neutral source of information when choosing between theories, or whether observations tend to be "contaminated" by theoretical assumptions in a way that prevents them from having this role.

11 - Naturalism and the Social Structure of Science

Is science a fundamentally cooperative enterprise, or is it a fundamentally competitive one in which scientists are out for personal advancement? According to Hull (and also Merton), science runs on a combination of cooperation and competition. Neither is fundamental, and the special features of science are due to an interaction between the two. This interaction arises from the reward system found in science and the context in which the reward system operates.

For Hull, being used and cited matters more than anything else.

Hull also argues that the reason why fraud in science is so much more serious a crime than theft, even in cases where public well-being is not affected, has to do with these sorts of factors. In a case of theft or plagiarism, the only person harmed is the one stolen from. But when a case of fraud is discovered, all the scientists who used the fraudulent work will find their work on that topic deemed unreliable, and their work will not be used.

The Royal Society of London, under its skillful first secretary, Henry Oldenburg, used rapid publication in the Philosophical Transactions to allocate credit and to encourage people to share their ideas. Oldenburg's system, which also included anonymous refereeing of papers, is basically what has come down to us today.

Philip Kitcher asks how a scientific community should distribute its workers across two rival research programs when one looks more promising than the other, and he compares several possible reward schemes. Here is a third option: we reward only the individuals who work on the research program that succeeds, but we divide the "pie" equally among all the workers who chose that program. So the reward an individual gets will depend not just on their own choice but on how many other individuals chose the same program. This third reward system, Kitcher argues, will produce a good distribution of workers across the two options; the toy calculation below illustrates why.
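
A sketch of the calculation (my own illustration, with made-up success-probability curves and the prize normalized to 1, so it only gestures at Kitcher's point rather than reproducing his actual model):

    from math import exp

    N = 100  # total workers to distribute across two rival research programs

    def p1(n):
        # hypothetical chance that program 1 succeeds if n workers pursue it
        return 0.9 * (1 - exp(-n / 20))

    def p2(n):
        # program 2 looks less promising overall
        return 0.5 * (1 - exp(-n / 20))

    def expected_share(p, n):
        # a prize of 1 is split equally among the n workers who chose the program
        return p(n) / n if n > 0 else 0.0

    # A self-interested worker joins whichever program offers the larger expected
    # share, so the stable split is (roughly) where the two shares are equal.
    split = min(range(1, N),
                key=lambda n1: abs(expected_share(p1, n1) - expected_share(p2, N - n1)))
    print(split, N - split)  # about a 70/30 split: neither program is abandoned

With these invented numbers the stable split comes out at roughly 70/30, so the less promising program is not abandoned, which is exactly the community-level outcome the scheme is supposed to secure even though each worker is only chasing personal reward.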

The power of science is seen in the cumulative and coordinated nature of scientific work; each generation in science builds on the work of workers who came before, and each generation organizes its energies via collaboration and public discussion.

The ideas of people like Merton, Kuhn, Hull, and Kitcher might describe science from the seventeenth to the twentieth centuries, but change may well be in the air. Scientists have usually not hoped to become rich through their work; recognition, especially by their peers, has been an alternative form of reward. But a number of commentators have noted that big financial rewards have now started to become a far more visible feature of the life of the scientist, especially in areas like biotechnology. Kuhn argued that the insulation of science from pushes and pulls deriving from external political and economic life was a key source of science's strength. We do not know how fragile the social structure of science might be.

12 - Scientific Realism

Scientific Realism:

  1. Common-sense realism naturalized.
  2. One actual and reasonable aim of science is to give us accurate descriptions (and other representations) of what reality is like. This project includes giving us accurate representations of aspects of reality that are unobservable.

What level of confidence should we have in our current theories, given the dramatic history of change in science? We should not think that this question is one to be settled solely by the historical track record. We might have reason to believe that our methods of hypothesizing and testing theories have improved over the years. But history will certainly give us interesting data on the question. We might find good reason to have different levels of confidence, and also different kinds of confidence, in different domains of science.

Many hypotheses in science are expressed using models. Consider the case of mathematical models. These are abstract mathematical structures that are supposed to represent key features of real systems in the world. But in thinking about how a mathematical model might succeed in representing the world, the linguistic concepts of truth, falsity, reference, and so forth do not seem to be useful. Models have a different kind of representational relationship with the world from that found in language. A good model is one that has some kind of similarity relationship, probably of an abstract kind, with the system that the model is "targeted" at (Giere 1988). It is hard to work out the details of this idea.
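
A toy illustration of the contrast (my own example, not Giere's): the idealized pendulum model

    \theta(t) = \theta_0 \cos(\sqrt{g/L}\, t)

exactly describes a fictional system - a frictionless point mass swinging through small angles on a massless rod - and no real pendulum satisfies it. What a real pendulum can do is resemble that fictional system in certain respects and to certain degrees, and it is this similarity, not the truth or falsity of any sentence, that makes the model a good or bad representation of its target.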

13 - Explanation

Empiricist philosophers, I said above, have sometimes been distrustful of the idea that science explains things. Logical positivism is an example. The idea of explanation was sometimes associated by the positivists with the idea of achieving deep metaphysical insight into the world - an idea they would have nothing to do with. But the logical positivists and logical empiricists did make peace with the idea that science explains. They did this by construing "explanation" in a low-key way that fitted into their empiricist picture. The result was the covering law theory of explanation. This was the dominant philosophical theory about scientific explanation for a good part of the twentieth century. The view is now dead, but its rise and fall are instructive.

An equally good argument, logically speaking, can be run in both directions in the classic flagpole-and-shadow case; either quantity can give information about the other. But it seems that we cannot run an equally good explanation in both directions, though the covering law theory says we can. It is fine to explain the length of the shadow in terms of the flagpole and the sun, but it is not fine to explain the length of the flagpole in terms of the shadow and the sun.
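
To make the symmetry concrete (standard trigonometry, not a passage from the book): with the sun at elevation angle \alpha, a flagpole of height h casts a shadow of length

    s = h / \tan\alpha,

and the same law rearranged gives h = s \tan\alpha. Both derivations fit the covering law pattern equally well, yet only the first reads as an explanation.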

We confidently used the idea of causation to resolve the flagpole case, but the whole idea of causation and causal connection is extremely controversial in philosophy. For many philosophers, causation is a suspicious metaphysical concept that we do best to avoid when trying to understand science. This suspicion is, again, common within the empiricist tradition. It derives from the work of Hume. The suspicion is directed especially at the idea of causation as a sort of hidden connection between things, unobservable but essential to the operation of the universe.

Science constantly strives to reduce the number of things that we must accept as fundamental. We try to develop general explanatory schemata - patterns of explanation - that can be applied as widely as possible.

The standards for a good explanation in field A need not suffice in field B.

It is Kuhn's view that the idea of explanation will evolve as our ideas about science and about the universe change.

He denies, as I do, that explanation is some single, special relation common to all of science.

Causation is sometimes called, half jokingly, "the cement of the universe."

In 1983 Nancy Cartwright delivered a wake-up call to the field with a book called How the Laws of Physics Lie, in which she argued that what people call "laws of physics" do not usually describe the behavior of real systems at all, but only describe the behavior of highly idealized fictional systems.

14 - Bayesianism and Modern Theories of Evidence

If someone had come up with a really convincing theory of confirmation, it would have been harder to argue for radical views of the kind discussed in chapters 7-9. The absence of such a theory put empiricist philosophers on the defensive. The situation has now changed. Once again a large number of philosophers have real hope in a theory of confirmation and evidence. The new view is called Bayesianism.

These are the two central ideas in Bayesianism: the idea that e confirms h if e raises the probability of h, and the idea that probabilities should be updated in a way dictated by Bayes's theorem.
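
In symbols (the standard formulation, not a quotation from the book): e confirms h just in case P(h|e) > P(h), and the updated probability is given by Bayes's theorem,

    P(h|e) = P(e|h) P(h) / P(e),

where the denominator can be expanded as P(e) = P(e|h) P(h) + P(e|¬h) P(¬h).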

So although it would be good to use Bayes's theorem to discuss evidence, many interpretations of probability will not allow this because they cannot make sense of prior probabilities of theories. If we want to use Bayes's theorem, we need an interpretation of probability that will allow us to talk about prior probabilities. And that is what Bayesians have developed. This interpretation of probability is called the subjectivist interpretation.

The subjectivist approach to probability was pioneered (independently) by two philosopher-mathematicians, Frank Ramsey and Bruno de Finetti, in the 1920s and 1930s. This interpretation of probability is not only important in philosophy; it is central to decision theory, which has great importance in the social sciences (especially economics).

If we know a person's subjectively fair odds for a bet, we can read off his degree of belief in the proposition that the bet is about.
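
For example (standard betting arithmetic, not a passage from the book): if you regard odds of 3:1 against p as fair, your degree of belief in p is 1/4. More generally, treating odds of a:b against p as fair corresponds to the degree of belief

    P(p) = b / (a + b).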

Why should your degrees of belief follow these rules? Subjectivists argue for this with a famous form of argument called a "Dutch book." (My apologies to any readers who are Dutch.)
The argument is as follows: if your degrees of belief do not conform to the principles of the probability calculus, there are possible gambling situations in which you are guaranteed to lose money, no matter how things turn out.
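
A minimal numerical illustration (my own toy numbers, not an example from the book): suppose your degrees of belief in p and in not-p are both 0.6, so they sum to 1.2 rather than 1. A bookie who sells you a unit bet on each proposition at the prices you yourself regard as fair takes your money no matter what:

    # Toy Dutch book: degrees of belief in p and not-p that sum to more than 1.
    belief_p, belief_not_p = 0.6, 0.6          # incoherent: they should sum to 1

    stake = 1.0                                # each bet pays 1 unit if it wins
    cost = (belief_p + belief_not_p) * stake   # price you accept for both bets: 1.2

    for p_is_true in (True, False):
        payout = stake                         # exactly one of the two bets pays off
        print(f"p is {p_is_true}: net = {payout - cost:+.2f}")
    # Both lines print net = -0.20: a guaranteed loss, whatever the truth about p.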

So "today's posteriors are tomorrow's priors."

The convergence proofs assume that when two people start with very different priors, they nonetheless agree about all their likelihoods (probabilities of the form P(e|h), etc.). That is needed for disagreement about the priors to "wash out." But why should we expect this agreement about likelihoods? Why should two people who disagree massively on many things have the same likelihoods for all
possible evidence? Why don't their disagreements affect their views on the relevance of possible observations? This agreement might be present, but there is no general reason why it should be.
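
To see the role shared likelihoods play, here is a small simulation (my own sketch with invented numbers, not an example from the book): two agents agree on the likelihoods for coin tosses but start from very different priors about whether the coin is biased. Fed the same long run of evidence, their posteriors converge; if they also disagreed about the likelihoods, nothing would force this convergence.

    import random

    random.seed(0)

    # Shared likelihoods: P(heads | h) where h = "the coin is biased toward heads",
    # and P(heads | not-h) where not-h = "the coin is fair".
    LIKELIHOOD_HEADS = {True: 0.8, False: 0.5}

    def update(prior_h, saw_heads):
        """One Bayes-theorem update of the probability of h on a single toss."""
        like_h = LIKELIHOOD_HEADS[True] if saw_heads else 1 - LIKELIHOOD_HEADS[True]
        like_not_h = LIKELIHOOD_HEADS[False] if saw_heads else 1 - LIKELIHOOD_HEADS[False]
        evidence = like_h * prior_h + like_not_h * (1 - prior_h)
        return like_h * prior_h / evidence

    agent_a, agent_b = 0.99, 0.01        # wildly different prior probabilities for h
    for _ in range(200):
        toss_is_heads = random.random() < 0.8   # the coin really is biased
        agent_a = update(agent_a, toss_is_heads)
        agent_b = update(agent_b, toss_is_heads)

    print(round(agent_a, 3), round(agent_b, 3))  # both posteriors end up near 1.0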

Eliminative inference is, of course, the kind of reasoning associated with the famous fictional detective Sherlock Holmes.

Given two possible explanations for the data, scientists often prefer the simpler one. Despite various elaborate attempts, I do not think we have made much progress on understanding the operation of, or justification for, this preference.

The role of procedures is fundamental; an observation is only evidence if it is embedded in the right kind of procedure. I think this is a very general fact about evidence and confirmation; Hempel was wrong to think that generalizations are always confirmed by observations of their instances. There is only confirmation (or support) if the underlying procedure was of the right kind.

15 - Empiricism, Naturalism, and Scientific Realism?

In particular, I will connect three ideas: empiricism, naturalism, and scientific realism.

Empiricism traditionally holds that our source of knowledge about the world is experience. Naturalism holds that we can only hope to resolve philosophical problems (including epistemological problems) by approaching them within a scientific picture of ourselves and our place in the universe. Scientific realism holds that science can reasonably aim to describe the real structure of the world, including its unobservable structure.

In recent years the tension between scientific realism and empiricism has often been debated under the heading "the underdetermination of theory by evidence."

So how should we describe the role of experience? The right way to proceed is to cast empiricism within a naturalistic approach to philosophy. My version of this approach is influenced by the early-twentieth-century naturalism of John Dewey (1929).

At the end of chapter 10, I said that we might think of science as something like a strategy. In this sense science is the strategy of subjecting even the biggest theoretical ideas, questions, and disputes to testing by means of observation. This strategy is not dictated to us by the nature of human language, the fundamental rules of thought, or our biology; it is more like a choice. The choice can be made by an individual or by a culture. The scientific strategy is to construe ideas, to embed them in surrounding frameworks, and to develop them, in such a way that exposure to experience is sought even in the case of the most general and ambitious hypotheses about the universe. That view of science is a kind of empiricism.

Let us distinguish the general scientific strategy from a particular way of organizing how the strategy is carried out. The strategy itself is the attempt to assess big ideas by exposing them to experience. In a broad sense, that is what science is all about. But the Scientific Revolution and the work that followed it also developed a particular, socially organized way of carrying out the strategy. The term "science" can also be used, more narrowly, to refer to that social organization.

The crucial feature we find along this dimension is that scientific work is cumulative. Each generation builds on the work of predecessors; current workers "stand on the shoulders" of earlier workers, as Isaac Newton once put it. This requires both trustworthy ways of transmitting ideas across time and (again) a reward system that makes it worthwhile to carry on where earlier workers left off.

For several of the figures discussed in this book, the way that the empiricist strategy has been socially organized by modern science exhibits a remarkable balance. Or, more accurately, we seem to find a couple of different balances. One is a balance between competition and cooperation; this is, in a sense, the message of the work by Merton, Hull, and Kitcher discussed in chapters 8 and 11. The other is a balance between criticism and trust.

We can see Kuhn as arguing that science cannot be described by any kind of simple empiricist formula, because science is a much more complicated machine than traditional empiricism ever imagined. Empiricist ideas are not just vague and incomplete; they get it wrong. Empiricist views have no resources to describe the complex balances found in scientific work, especially balances found in the social organization of science.

The critic of empiricism suspects that people like me want to hang onto empiricist ideas because they are pleasingly simple and often rhetorically useful. What makes science different from attempts to understand the world based on religious fundamentalism? When questions like this are asked, the empiricist seems able to give a simple and satisfying answer. "Science is different because it is a process in which beliefs are shaped by observation. Ideas are assessed not in terms of their origins, but in terms of how they stand up to testing. Science is open-minded, anti-authoritarian, and flexible." Nice and simple. Now suppose that these traditional empiricist ideas are replaced by a much more complex story, a story about delicate balances, a special reward system, moves within and between frameworks.... The defender of a more complex story might still insist that science is a superior approach to investigation. But the features that make science different will not be obvious, simple features, as they are according to the empiricist story. Simplicity is often attractive, but simple answers are often false.

What are the key issues for philosophy of science in the near future?

One key issue has to do with the reward system in science, and the relations between individual-level and community-level goals. So far the philosophical treatments of this topic have tended to generalize a lot, and it has been assumed that scientists have all internalized a similar set of motivations. Using input from sociology of science, it should be possible to tell a much more detailed story. What differences are there between different fields and different subcultures in science, for example? The relation between competition and cooperation in science is a fascinating topic.