LOGIC
Philosophy of logic, the study, from a philosophical perspective, of the nature and types of logic, including problems in the field and the relation of logic to mathematics and other disciplines. The term logic comes from the Greek word logos. The variety of senses that logos possesses may suggest the difficulties to be encountered in characterizing the nature and scope of logic. Among the partial translations of logos, there are “sentence,” “discourse,” “reason,” “rule,” “ratio,” “account” (especially the account of the meaning of an expression), “rational principle,” and “definition.” Not unlike this proliferation of meanings, the subject matter of logic has been said to be the “laws of thought,” “the rules of right reasoning,” “the principles of valid argumentation,” “the use of certain words labelled ‘logical constants’,” “truths (true propositions) based solely on the meanings of the terms they contain,” and so on.
LOGIC AS A DISCIPLINE
Nature and varieties of logic
It is relatively easy to discern some order in the above embarrassment of explanations. Some of the characterizations are in fact closely related to each other. When logic is said, for instance, to be the study of the laws of thought, these laws cannot be the empirical (or observable) regularities of actual human thinking as studied in psychology; they must be laws of correct reasoning, which are independent of the psychological idiosyncrasies of the thinker. Moreover, there is a parallelism between correct thinking and valid argumentation: valid argumentation may be thought of as an expression of correct thinking, and the latter as an internalization of the former. In the sense of this parallelism, laws of correct thought will match those of correct argumentation. The characteristic mark of the latter is, in turn, that they do not depend on any particular matters of fact. Whenever an argument that takes a reasoner from p to q is valid, it must hold independently of what he happens to know or believe about the subject matter of p and q. The only other source of the certainty of the connection between p and q, however, is presumably constituted by the meanings of the terms that the propositions p and q contain. These very same meanings will then also make the sentence “If p, then q” true irrespective of all contingent matters of fact. More generally, one can validly argue from p to q if and only if the implication “If p, then q” is logically true—i.e., true in virtue of the meanings of words occurring in p and q, independently of any matter of fact. Logic may thus be characterized as the study of truths based completely on the meanings of the terms they contain. In order to accommodate certain traditional ideas within the scope of this formulation, the meanings in question may have to be understood as embodying insights into the essences of the entities denoted by the terms, not merely codifications of customary linguistic usage.
The following proposition (from Aristotle), for instance, is a simple truth of logic: “If sight is perception, the objects of sight are objects of perception.” Its truth can be grasped without holding any opinions as to what, in fact, the relationship of sight to perception is. What is needed is merely an understanding of what is meant by such terms as “if–then,” “is,” and “are,” and an understanding that “object of” expresses some sort of relation. The logical truth of Aristotle’s sample proposition is reflected by the fact that “The objects of sight are objects of perception” can validly be inferred from “Sight is perception.” Many questions nevertheless remain unanswered by this characterization. The contrast between matters of fact and relations between meanings that was relied on in the characterization has been challenged, together with the very notion of meaning. Even if both are accepted, there remains a considerable tension between a wider and a narrower conception of logic. According to the wider interpretation, all truths depending only on meanings belong to logic. It is in this sense that the word logic is to be taken in such designations as “epistemic logic” (logic of knowledge), “doxastic logic” (logic of belief), “deontic logic” (logic of norms), “the logic of science,” “inductive logic,” and so on. According to the narrower conception, logical truths obtain (or hold) in virtue of certain specific terms, often called logical constants. Whether they can be given an intrinsic characterization or whether they can be specified only by enumeration is a moot point. It is generally agreed, however, that they include (1) such propositional connectives as “not,” “and,” “or,” and “if–then” and (2) the so-called quantifiers “(∃x)” (which may be read: “For at least one individual, call it x, it is true that”) and “(∀x)” (“For each individual, call it x, it is true that”). The dummy letter x is here called a bound (individual) variable.
Its values are supposed to be members of some fixed class of entities, called individuals, a class that is variously known as the universe of discourse, the universe presupposed in an interpretation, or the domain of individuals. Its members are said to be quantified over in “(∃x)” or “(∀x).” Furthermore, (3) the concept of identity (expressed by =) and (4) some notion of predication (an individual’s having a property or a relation’s holding between several individuals) belong to logic. The forms that the study of these logical constants takes are described in greater detail in the article logic, in which the different kinds of logical notation are also explained. Here, only a delineation of the field of logic is given.
Features and problems of logic
Three areas of general concern are the following.
Logical semantics
For the purpose of clarifying logical truth and hence the concept of logic itself, a tool that has turned out to be more important than the idea of logical form is logical semantics, sometimes also known as model theory. By this is meant a study of the relationships of linguistic expressions to those structures in which they may be interpreted and of which they can then convey information. The crucial idea in this theory is that of truth (absolutely or with respect to an interpretation). It was first analyzed in logical semantics around 1930 by the Polish-American logician Alfred Tarski. In its different variants, logical semantics is the central area in the philosophy of logic. It enables the logician to characterize the notion of logical truth irrespective of the supply of nonlogical constants that happen to be available to be substituted for variables, although this supply had to be used in the characterization that turned on the idea of logical form. It also enables him to identify logically true sentences with those that are true in every interpretation (in “every possible world”).
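For the propositional fragment of logic, the identification of logical truth with truth in every interpretation can be checked mechanically by enumerating all truth assignments. The following sketch is illustrative only; the function and formula names are assumptions, not notation from the text.

```python
from itertools import product

def is_tautology(formula, variables):
    """Return True if `formula` (a function from a truth assignment to a
    bool) holds under every assignment to `variables` -- i.e., if it is
    true in every interpretation."""
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if not formula(assignment):
            return False
    return True

# "If p, then q" alone is not a logical truth: it fails when p is true and q false.
implication = lambda a: (not a["p"]) or a["q"]
# "If p and (if p, then q), then q" (modus ponens) holds in every interpretation.
modus_ponens = lambda a: not (a["p"] and ((not a["p"]) or a["q"])) or a["q"]

print(is_tautology(implication, ["p", "q"]))   # False
print(is_tautology(modus_ponens, ["p", "q"]))  # True
```

The brute-force enumeration works only for the propositional case; for first-order logic the space of interpretations is infinite, which is part of what makes the model-theoretic characterization a genuinely semantical rather than mechanical one.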
The ideas on which logical semantics is based are not unproblematic, however. For one thing, a semantical approach presupposes that the language in question can be viewed “from the outside”; i.e., considered as a calculus that can be variously interpreted and not as the all-encompassing medium in which all communication takes place (logic as calculus versus logic as language). Furthermore, in most of the usual logical semantics the very relations that connect language with reality are left unanalyzed and static. Ludwig Wittgenstein, an Austrian-born philosopher, discussed informally the “language-games”—or rule-governed activities connecting a language with the world—that are supposed to give the expressions of language their meanings; but these games have scarcely been related to any systematic logical theory. Only a few other attempts to study the dynamics of the representative relationships between language and reality have been made. The simplest of these suggestions is perhaps that the semantics of first-order logic should be considered in terms of certain games (in the precise sense of game theory) that are, roughly speaking, attempts to verify a given first-order sentence. The truth of the sentence would then mean the existence of a winning strategy in such a game.
Limitations of logic
Many philosophers are distinctly uneasy about the wider sense of logic. Some of their apprehensions, voiced with special eloquence by a contemporary Harvard University logician, Willard Van Orman Quine, are based on the claim that relations of synonymy cannot be fully determined by empirical means. Other apprehensions have to do with the fact that most extensions of first-order logic do not admit of a complete axiomatization; i.e., their truths cannot all be derived from any finite—or recursive (see below)—set of axioms.
This fact was shown by the important “incompleteness” theorems proved in 1931 by Kurt Gödel, an Austrian (later, American) logician, and their various consequences and extensions. (Gödel showed that any consistent axiomatic theory that comprises a certain amount of elementary arithmetic is incapable of being completely axiomatized.) Higher-order logics are in this sense incomplete and so are all reasonably powerful systems of set theory. Although a semantical theory can be built for them, they can scarcely be characterized any longer as giving actual rules—in any case complete rules—for right reasoning or for valid argumentation. Because of this shortcoming, several traditional definitions of logic seem to be inapplicable to these parts of logical studies. These apprehensions do not arise in the case of modal logic, which may be defined, in the narrow sense, as the study of logical necessity and possibility; for even quantified modal logic admits of a complete axiomatization. Other, related problems nevertheless arise in this area. It is tempting to try to interpret such a notion as logical necessity as a syntactical predicate; i.e., as a predicate the applicability of which depends only on the form of the sentence claimed to be necessary—rather like the applicability of formal rules of proof. It has been shown, however, by Richard Montague, an American logician, that this cannot be done for the usual systems of modal logic.
Logic and computability
These findings of Gödel and Montague are closely related to the general study of computability, which is usually known as recursive function theory (see mathematics, foundations of: The crisis in foundations following 1900: Logicism, formalism, and the metamathematical method) and which is one of the most important branches of contemporary logic.
In this part of logic, functions—or laws governing numerical or other precise one-to-one or many-to-one relationships—are studied with regard to the possibility of their being computed; i.e., of being effectively—or mechanically—calculable. Functions that can be so calculated are called recursive. Several different and historically independent attempts have been made to define the class of all recursive functions, and these have turned out to coincide with each other. The claim that recursive functions exhaust the class of all functions that are effectively calculable (in some intuitive informal sense) is known as Church’s thesis (named after the American logician Alonzo Church). One of the definitions of recursive functions is that they are computable by a kind of idealized automaton known as a Turing machine (named after Alan Mathison Turing, a British mathematician and logician). Recursive function theory may therefore be considered a theory of these idealized automata. The main idealization involved (as compared with actually realizable computers) is the availability of a potentially infinite tape. The theory of computability prompts many philosophical questions, most of which have not so far been answered satisfactorily. It poses the question, for example, of the extent to which all thinking can be carried out mechanically. Since it quickly turns out that many functions employed in mathematics—including many in elementary number theory—are nonrecursive, one may wonder whether it follows that a mathematician’s mind in thinking of such functions cannot be a mechanism and whether the possibly nonmechanical character of mathematical thinking may have consequences for the problems of determinism and free will. Further work is needed before definitive answers can be given to these important questions.
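An idealized automaton of the kind just described can be simulated in a few lines. The machine table below, a unary successor machine, is an invented example for illustration, assuming the usual convention that the table maps (state, symbol) to (written symbol, head move, next state); the dictionary tape stands in for the potentially infinite tape.

```python
def run_turing_machine(table, tape, state="q0", accept="halt", max_steps=10_000):
    """Simulate a one-tape Turing machine. `tape` maps positions to
    symbols (blank = "_"), so unused cells need not be stored --
    an idealization of the potentially infinite tape."""
    tape = dict(tape)
    head = 0
    for _ in range(max_steps):
        if state == accept:
            return tape
        symbol = tape.get(head, "_")
        written, move, state = table[(state, symbol)]
        tape[head] = written
        head += {"R": 1, "L": -1, "N": 0}[move]
    raise RuntimeError("machine did not halt within the step bound")

# Successor in unary notation: scan right past the 1s, then append one more 1.
successor = {
    ("q0", "1"): ("1", "R", "q0"),   # keep moving right over the input
    ("q0", "_"): ("1", "N", "halt"), # write an extra 1 at the first blank
}
tape = run_turing_machine(successor, {0: "1", 1: "1", 2: "1"})  # input: 3
print(sum(1 for s in tape.values() if s == "1"))  # 4
```

Church’s thesis, in these terms, is the claim that every function calculable by any effective procedure at all can be computed by some such machine table; the thesis is not provable, since “effectively calculable” is an intuitive rather than a formal notion.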
EXISTENTIALISM
Existentialism in the broader sense is a 20th-century philosophy that is centered upon the analysis of existence and of the way humans find themselves existing in the world. The notion is that humans exist first and then each individual spends a lifetime changing their essence or nature. In simpler terms, existentialism is a philosophy concerned with finding self and the meaning of life through free will, choice, and personal responsibility. The belief is that people are searching to find out who and what they are throughout life as they make choices based on their experiences, beliefs, and outlook. Personal choices thus become unique, without the necessity of an objective form of truth. An existentialist believes that a person should be forced to choose and be responsible without the help of laws, ethical rules, or traditions.
Existentialism – What It Is and Isn’t
Existentialism takes into consideration the following underlying concepts:
- Human free will
- Human nature is chosen through life choices
- A person is best when struggling against their individual nature, fighting for life
- Decisions are not without stress and consequences
- There are things that are not rational
- Personal responsibility and discipline are crucial
- Society is unnatural and its traditional religious and secular rules are arbitrary
- Worldly desire is futile
Existentialism is broadly defined in a variety of concepts, and there can be no one answer as to what it is, yet it does not support any of the following:
- Wealth, pleasure, or honor make the good life
- Social values and structure control the individual
- Accept what is and that is enough in life
- Science can and will make everything better
- People are basically good but ruined by society or external forces
- An “I want my way, now!” or “It is not my fault!” mentality
There is a wide variety of philosophical, religious, and political ideologies that make up existentialism, so there is no universal agreement on an arbitrary set of ideals and beliefs.
Politics vary, but each seeks the most individual freedom for people within a society.
Existentialism – Impact on Society
Existentialist ideas came out of a time in society when there was a deep sense of despair following the Great Depression and World War II. The spirit of optimism that had animated society was destroyed by World War I and the calamities of the mid-century. This despair has been articulated by existentialist philosophers well into the 1970s and continues to this day as a popular way of thinking and reasoning (with the freedom to choose one’s preferred moral belief system and lifestyle). An existentialist can be a religious moralist, an agnostic relativist, or an amoral atheist. Kierkegaard, a religious philosopher, Nietzsche, an anti-Christian, Sartre, an atheist, and Camus, an atheist, are credited for their works and writings about existentialism. Sartre is noted for bringing the most international attention to existentialism in the 20th century. Each basically agrees that human life is in no way complete and fully satisfying because of the suffering and losses that occur when one considers the lack of perfection, power, and control one has over one’s life. Even though they agree that life is not optimally satisfying, it nonetheless has meaning. Existentialism is the search and journey for true self and true personal meaning in life. Most important, it is the arbitrary act that existentialism finds most objectionable: when someone or some society tries to impose or demand that its beliefs, values, or rules be faithfully accepted and obeyed. Existentialists believe this destroys individualism and makes a person become whatever the people in power desire; the person is thus dehumanized and reduced to an object. Existentialism therefore stresses that a person’s judgment is the determining factor for what is to be believed, rather than arbitrary religious or secular world values.
ANALYTIC TRADITION
Analytic philosophy, also called linguistic philosophy, a loosely related set of approaches to philosophical problems, dominant in Anglo-American philosophy from the early 20th century, that emphasizes the study of language and the logical analysis of concepts. Although most work in analytic philosophy has been done in Great Britain and the United States, significant contributions also have been made in other countries, notably Australia, New Zealand, and the countries of Scandinavia.
NATURE OF ANALYTIC PHILOSOPHY
Analytic philosophers conduct conceptual investigations that characteristically, though not invariably, involve studies of the language in which the concepts in question are, or can be, expressed. According to one tradition in analytic philosophy (sometimes referred to as formalism), for example, the definition of a concept can be determined by uncovering the underlying logical structures, or “logical forms,” of the sentences used to express it. A perspicuous representation of these structures in the language of modern symbolic logic, so the formalists thought, would make clear the logically permissible inferences to and from such sentences and thereby establish the logical boundaries of the concept under study. Another tradition, sometimes referred to as informalism, similarly turned to the sentences in which the concept was expressed but instead emphasized their diverse uses in ordinary language and everyday situations, the idea being to elucidate the concept by noting how its various features are reflected in how people actually talk and act. Even among analytic philosophers whose approaches were not essentially either formalist or informalist, philosophical problems were often conceived of as problems about the nature of language. An influential debate in analytic ethics, for example, concerned the question of whether sentences that express moral judgments (e.g., “It is wrong to tell a lie”) are descriptions of some feature of the world, in which case the sentences can be true or false, or are merely expressions of the subject’s feelings—comparable to shouts of “Bravo!” or “Boo!”—in which case they have no truth-value at all. Thus, in this debate the philosophical problem of the nature of right and wrong was treated as a problem about the logical or grammatical status of moral statements. 
The empiricist tradition
In spirit, style, and focus, analytic philosophy has strong ties to the tradition of empiricism, which has characterized philosophy in Britain for some centuries, distinguishing it from the rationalism of Continental European philosophy. In fact, the beginning of modern analytic philosophy is usually dated from the time when two of its major figures, Bertrand Russell (1872–1970) and G.E. Moore (1873–1958), rebelled against an anti-empiricist idealism that had temporarily captured the English philosophical scene. The most renowned of the British empiricists—John Locke, George Berkeley, David Hume, and John Stuart Mill—have many interests and methods in common with contemporary analytic philosophers. And although analytic philosophers have attacked some of the empiricists’ particular doctrines, one feels that this is the result more of a common interest in certain problems than of any difference in general philosophical outlook. Most empiricists, though admitting that the senses fail to yield the certainty requisite for knowledge, hold nonetheless that it is only through observation and experimentation that justified beliefs about the world can be gained—in other words, a priori reasoning from self-evident premises cannot reveal how the world is. Accordingly, many empiricists insist on a sharp dichotomy between the physical sciences, which ultimately must verify their theories by observation, and the deductive or a priori sciences—e.g., mathematics and logic—the method of which is the deduction of theorems from axioms. The deductive sciences, in the empiricists’ view, cannot produce justified beliefs, much less knowledge, about the world. This conclusion was a cornerstone of two important early movements in analytic philosophy, logical atomism and logical positivism.
In the positivist’s view, for example, the theorems of mathematics do not represent genuine knowledge of a world of mathematical objects but instead are merely the result of working out the consequences of the conventions that govern the use of mathematical symbols. The question then arises whether philosophy itself is to be assimilated to the empirical or to the a priori sciences. Early empiricists assimilated it to the empirical sciences. Moreover, they were less self-reflective about the methods of philosophy than are contemporary analytic philosophers. Preoccupied with epistemology (the theory of knowledge) and the philosophy of mind, and holding that fundamental facts can be learned about these subjects from individual introspection, early empiricists took their work to be a kind of introspective psychology. Analytic philosophers in the 20th century, on the other hand, were less inclined to appeal ultimately to direct introspection. More important, the development of modern symbolic logic seemed to promise help in solving philosophical problems—and logic is as a priori as science can be. It seemed, then, that philosophy must be classified with mathematics and logic. The exact nature and proper methodology of philosophy, however, remained in dispute.
The role of symbolic logic
For philosophers oriented toward formalism, the advent of modern symbolic logic in the late 19th century was a watershed in the history of philosophy, because it added greatly to the class of statements and inferences that could be represented in formal (i.e., axiomatic) languages. The formal representation of these statements provided insight into their underlying logical structures; at the same time, it helped to dispel certain philosophical puzzles that had been created, in the view of the formalists, through the tendency of earlier philosophers to mistake surface grammatical form for logical form.
Because of the similarity of sentences such as “Tigers bite” and “Tigers exist,” for example, the verb to exist may seem to function, as other verbs do, to predicate something of the subject. It may seem, then, that existence is a property of tigers, just as their biting is. In symbolic logic, however, existence is not a property; it is a higher-order function that takes so-called “propositional functions” as arguments. Thus, when the propositional function “Tx”—in which T stands for the predicate “…is a tiger” and x is a variable replaceable with a name—is written beside a symbol known as the existential quantifier—∃x, meaning “There exists at least one x such that…”—the result is a sentence that means “There exists at least one x such that x is a tiger.” The fact that existence is not a property in symbolic logic has had important philosophical consequences, one of which has been to show that the ontological argument for the existence of God, which has puzzled philosophers since its invention in the 11th century by St. Anselm of Canterbury, is unsound. Among 19th-century figures who contributed to the development of symbolic logic were the mathematicians George Boole (1815–64), the inventor of Boolean algebra, and Georg Cantor (1845–1918), the creator of set theory. The generally recognized founder of modern symbolic logic is Gottlob Frege (1848–1925), of the University of Jena in Germany. Frege, whose work was not fully appreciated until the mid-20th century, is historically important principally for his influence on Russell, whose program of logicism (the doctrine that the whole of mathematics can be derived from the principles of logic) had been attempted independently by Frege some 25 years before the publication of Russell’s principal logicist works, Principles of Mathematics (1903) and Principia Mathematica (1910–13; written in collaboration with Russell’s colleague at the University of Cambridge Alfred North Whitehead).
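The point that existence is a second-order notion rather than a property of individuals can be mirrored directly in code: the quantifier is a function that applies to a propositional function (a predicate), not to an individual. The domain and predicate below are illustrative assumptions, not examples from the text.

```python
def exists(domain, propositional_function):
    """The existential quantifier as a higher-order function: it takes a
    propositional function Tx as argument and yields a truth value,
    rather than predicating anything of a particular individual."""
    return any(propositional_function(x) for x in domain)

# A toy domain and the propositional function "x is a tiger".
animals = ["Shere Khan", "Baloo", "Bagheera"]
is_tiger = lambda x: x == "Shere Khan"

print(exists(animals, is_tiger))                  # "(∃x)Tx" over this domain: True
print(exists(animals, lambda x: x == "Pegasus"))  # False
```

Note that `exists` never says of any single animal that it "exists"; it says of the predicate `is_tiger` that it is satisfied somewhere in the domain, which is exactly the formalist point about "Tigers exist."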
PHENOMENOLOGY
Phenomenology, a philosophical movement originating in the 20th century, the primary objective of which is the direct investigation and description of phenomena as consciously experienced, without theories about their causal explanation and as free as possible from unexamined preconceptions and presuppositions. The word itself is much older, however, going back at least to the 18th century, when the Swiss German mathematician and philosopher Johann Heinrich Lambert applied it to that part of his theory of knowledge that distinguishes truth from illusion and error. In the 19th century the word became associated chiefly with the Phänomenologie des Geistes (1807; Phenomenology of Mind), by Georg Wilhelm Friedrich Hegel, who traced the development of the human spirit from mere sense experience to “absolute knowledge.” The so-called phenomenological movement did not get under way, however, until early in the 20th century. But even this new phenomenology included so many varieties that a comprehensive characterization of the subject requires their consideration.
CHARACTERISTICS OF PHENOMENOLOGY
In view of the spectrum of phenomenologies that have issued directly or indirectly from the original work of the German philosopher Edmund Husserl, it is not easy to find a common denominator for such a movement beyond its common source. But similar situations occur in other philosophical as well as nonphilosophical movements.
Essential features and variations
Although, as seen from Husserl’s last perspective, all departures from his own views could appear only as heresies, a more generous assessment will show that all those who consider themselves phenomenologists subscribe, for instance, to his watchword, zu den Sachen selbst (“to the things themselves”), by which they meant the taking of a fresh approach to concretely experienced phenomena—an approach as free as possible from conceptual presuppositions—and the attempt to describe them as faithfully as possible. Moreover, most adherents to phenomenology hold that it is possible to obtain insights into the essential structures and the essential relationships of these phenomena on the basis of a careful study of concrete examples supplied by experience or imagination and by a systematic variation of these examples in the imagination. Some phenomenologists also stress the need for studying the ways in which the phenomena appear in object-directed, or “intentional,” consciousness. Beyond this merely static aspect of appearance, some also want to investigate its genetic aspect, exploring, for instance, how the phenomenon intended—for example, a book—shapes (“constitutes”) itself in the typical unfolding of experience. Husserl himself believed that such studies require a previous suspension of belief (“epochē”) in the reality of these phenomena, whereas others consider it not indispensable but helpful. Finally, in existential phenomenology, the meanings of certain phenomena (such as anxiety) are explored by a special interpretive (“hermeneutic”) phenomenology, the methodology of which needs further clarification.