Logical positivism and its tenets
A. J. Ayer outlined the tenets of the logical positivist philosophical movement (also referred to as the Vienna Circle, the Berlin Circle, logical empiricism, and neopositivism) in his 1936 work Language, Truth, and Logic (LTL). It was an important intellectual development of the early 20th century, and many prominent scientists and philosophers were associated with the movement: Einstein, Bohr, Gödel, Carnap, Russell, Wittgenstein, and others.
The basic premise of logical positivism is that only statements that are empirically verifiable or provable by logical deduction are cognitively meaningful. Central to achieving this end is what is known as the verifiability criterion of meaning.
The strong form of the verification criterion holds that a statement must be conclusively verifiable or conclusively falsifiable to be cognitively meaningful. Ayer does not explicitly state, however, that failing this criterion also implies complete semantic meaninglessness and incoherence.
A sentence or claim is only ever true or false; to conclude otherwise would violate the law of excluded middle, under which either a sentence or its negation must be true. (This, of course, does not account for later non-classical logics.) Any truth claims outside the domains of empiricism and logic were considered beyond what we can possibly know, so we cannot comment on the truth or falsity of such claims.
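The law of excluded middle invoked here can be stated schematically (a standard classical-logic formulation, not a quotation from Ayer):

```latex
% Law of excluded middle: for any proposition P,
% either P holds or its negation holds.
P \lor \neg P
% Classically, this forces every sentence to take exactly one
% of the two truth values, true or false; intuitionistic and
% other non-classical logics reject this as a general law.
```
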
Therefore, the cognitively meaningful statements are those shown to be true or false by consulting our senses, or proven to be true or false by logical deduction, such as the truths of mathematics.
The strong-form of verification is problematic when one considers universal statements such as “all swans are white” (as black swans exist). This was pointed out by Karl Popper and is central to his notion of falsifiability.
The weak form of verificationism was thus a revision of the original strong-form principle. It allows us to accept universal statements such as "all swans are white" as true until shown to be false; we can therefore assign a degree of probability to such statements being true or corresponding to objective reality.
The verificationist principle is limited to what our senses can access about the ultimate reality of nature, a distinction originally drawn by Immanuel Kant. Much of Kant's philosophy is evident in Ayer's work, as much of modern philosophy is in Kant's debt.
Any statement which could not, even in principle, be empirically verified and contribute to the discovery of further empirical facts is meaningless under the weak form of the verification principle just as under the strong form.
Problems with logical positivism
The distinction between falsification and verification pointed out by Popper was an important one: a hypothesis can be shown to be false, but no amount of experimental verification or observation can ever let us say with 100% certainty that a scientific hypothesis will always be true, not even the laws of gravity. That kind of certainty is reserved for mathematical and logical truths.
The falsification principle has also been attacked by philosophers of science who were Popper's contemporaries. Popper did not see the falsification criterion as a criterion of meaning but as a criterion for demarcating science from pseudo-science, and it has some major shortcomings. Feyerabend and Kuhn are central to that debate in the philosophy of science.
The verification principle is itself analytic (so it cannot be shown to be true by empirical methods), yet it is not tautologous either. Wouldn't the logical positivist then have to say that the verificationist principle is meaningless?
Ayer gets around this by stating that the verificationist principle is not meaningless as it has empirical application and, therefore, contributes to our empirical understanding of the world.
Ayer and the logical positivists were only concerned with what Ayer would consider "scientific knowledge", or knowledge we can acquire through other empirical means (such as through our senses).
Pragmatics and semantics do not factor into the criterion of meaning much beyond the verificationist principle, and anything with an evaluative or supernatural aspect is meaningless regardless of its purpose. Whether religion is consoling, or whether ethical positions are necessary for us to live good lives, we are only ever talking about emotional states.
Ayer pretty much rejects this type of discussion as meaningless. Whether he meant literally semantically meaningless, he doesn’t clarify in LTL. Ultimately, however, even the most austere truths such as those in mathematics and physics still contain an evaluative aspect (one theory or proof may be more beautiful or elegant than another). It’s unclear what the logical positivist’s position would be with respect to this type of problem.
What the logical positivists were trying to delineate was an objective ontology of reality. Since value judgements cannot be evaluated objectively (though arguably all judgements we make about reality fall into this category), they had to be rejected by the logical positivists and could not be counted as empirical facts, hence not part of our scientific understanding of the world.
Ayer said in LTL,
I own that it is possible to influence other people by careful use of emotive language, but maintain that a value judgement is not a proposition and therefore neither true nor false … [S]tatements of value are not controlled by observation, as ordinary empirical propositions are, but only by a mysterious ‘intellectual intuition.’ A feature of this theory, which is seldom recognized by its advocates, is that it makes statements of value unverifiable.
In the strictest sense of the verificationist principle, that would mean Ayer rejects any statement of value, even for pragmatic ends, but he does not explicitly say so, and we cannot take it for granted.
What is the analytic-synthetic distinction?
Quine defines analyticity as "truth by virtue of meanings", independent of experience.
Synthetic knowledge is knowledge whose truth we justify by consulting our senses.
This was the basis of the logical positivist project. All types of cognitively meaningful knowledge fell under either one of these definitions.
Since analytic truth cannot be established by consulting experience, it is known to be true independently of experience.
“All bachelors are unmarried” is true by definition and is analytic, but “All bachelors are unhappy” is not true by definition as you need to consult sense-experience to know the truth or falsehood of this latter statement.
For Kant, 7 + 5 = 12 is true a priori but it is NOT analytic, since we do not arrive at the meaning of 12 by analysing the contents of the concepts 5 or 7.
Whereas the logical positivists held mathematics to be tautologous, Kant thought that the truths of mathematics were "synthetic a priori".
In his criticism of the analytic-synthetic distinction, Quine does not distinguish between the different types of a priori or analytic truths the way Kant did, and neither did the logical positivists. When Quine outlines his conception of analyticity, he groups both types of a priori knowledge together. So we can leave Kant's synthetic a priori aside, though it is a very interesting case.
Therefore, analytic truths encompass only those that are true independent of experience.
Cognitive synonymy obtains when a proposition's truth value is preserved (salva veritate) by substituting synonymous terms into the proposition.
"Bachelor" and "unmarried man" are synonymous terms.
So, we could claim, “All bachelors have not entered into a marriage contract” is true, just the same as if we were to claim, “All unmarried men have not entered into a marriage contract” is true.
Whenever we substitute synonymous terms into a proposition, we, therefore, can preserve the proposition’s truth value.
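Substitution salva veritate can be put schematically as follows (the notation is mine, not Quine's):

```latex
% If t_1 and t_2 are cognitively synonymous terms, then for any
% sentential context \varphi(\cdot), substituting one term for
% the other preserves truth value:
\varphi(t_1) \leftrightarrow \varphi(t_2)
% e.g. \varphi(\text{bachelor}) \leftrightarrow
%      \varphi(\text{unmarried man})
% for \varphi(x) = \text{``all } x\text{s have not entered into a
% marriage contract''}
```
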
Quine’s attack against the analytic-synthetic distinction
Analyticity and cognitive synonymy were attacked by Quine in his essay "Two Dogmas of Empiricism" because he did not think there was a coherent, non-circular definition of analyticity.
Quine does not think there is a distinction in kind between analytic and synthetic truths; rather, these two types of knowledge differ in degree.
One of the examples Quine uses to reject the analytic-synthetic distinction is "the morning star" and "the evening star". The two terms are synonymous, yet their synonymy has a component that is true due to empirical observation as well as a component that is true due to analyticity: we know the terms are synonymous because scientific observation has shown us that this is the case, using telescopes and our knowledge of astronomy.
Quine also cites the number of planets in the solar system as another related example:
The terms ‘9’ and ‘the number of the planets’ name one and the same abstract entity but presumably must be regarded as unlike in meaning; for astronomical observation was needed, and not mere reflection on meanings, to determine the sameness of the entity in question.
This, too, blurs the analytic-synthetic distinction.
So it is not that Quine thinks analyticity is meaningless; rather, he does not see a clear distinction between the analytic and the synthetic. Hence there are degrees of meaning that overlap between the two terms. This violates a standard dogma that had been accepted at least since the rationalist-empiricist divide (dating back to Descartes, Hume, etc.).
This is evident in this quote from Quine:
But, for all its a priori reasonableness, a boundary between analytic and synthetic statement simply has not been drawn. That there is such a distinction to be drawn at all is an unempirical dogma of empiricists, a metaphysical article of faith.
Analytic and synthetic knowledge therefore differ in degree and not in kind, not because of degrees of probability but because of the way we think about those concepts.
It is clear that Quine starts to take the analytic tradition in philosophy down the pragmatic path and away from some of the problems inherent in logical positivism:
Carnap, Lewis, and others take a pragmatic stand on the question of choosing between language forms, scientific frameworks; but their pragmatism leaves off at the imagined boundary between the analytic and the synthetic. In repudiating such a boundary I espouse a more thorough pragmatism. Each man is given a scientific heritage plus a continuing barrage of sensory stimulation; and the considerations which guide him in warping his scientific heritage to fit his continuing sensory promptings are, where rational, pragmatic.
Logical positivism has nonetheless had a lasting impact on the analytic philosophical tradition and the philosophy of science to the present day.
References
A. J. Ayer (1936), Language, Truth, and Logic, London: Gollancz, 2nd Edition, 1946.
W. V. Quine (1951), “Two Dogmas of Empiricism”, Philosophical Review, 60 (1951): 20–43; reprinted in From a Logical Point of View, pp. 20–46.