It is no pure and simple truism to say that the philosophy of mathematics has always existed - in Western culture, at least - since philosophy itself came into being. In fact, not only were most of the early Greek philosophers mathematicians as well, but it is also well known that speculation within the Greek systems of thought was very often based on problems of a mathematical nature. Soon mathematics, besides being a source of philosophical problems and of their solutions, became also a direct object of philosophical enquiry, to such an extent that the early forms of philosophy of science were in fact forms of the philosophy of mathematics. This sort of privileged link between mathematics and philosophy has never been interrupted in Western thought, and even today there are philosophies that are in various ways influenced by mathematical thought (from the philosophy of Russell to that of Husserl, from Wittgenstein to Carnap and logical empiricism), while a large amount of research is being directed to the study of the 'philosophy of mathematics' proper. Underneath this undeniable continuity, however, considerable differences emerge between the ancient and modern conceptions of the philosophy of mathematics, and some problems, once considered of primary importance, seem now to have almost completely lost their value. To mention only a few of them, let us recall certain classical questions of the old philosophy of mathematics: What are numbers? What kind of relationship exists between mathematics and the structure of the world? How can the property of universality and necessity of mathematical truths be explained? In what sense is mathematical infinity to be conceived? Is mathematics man's invention or man's discovery? By attributing such questions to the past, we do not mean to confine them to ages far distant from our own.
Even Dedekind, for example, entitled one of his most celebrated essays: "What are numbers and what must they be?"; and Frege devoted the whole of his profound logico-mathematical research to the effort of pointing out the essence of the natural number - to mention only these two great authors. However, the theories
Synthese 27 (1974) 7-26. All Rights Reserved Copyright 1974 by D. Reidel Publishing Company, Dordrecht-Holland


that are nowadays set out in the field of the philosophy of mathematics and the answers that are sometimes given to some of the traditional questions, appear to be on a different plane: the subject of research, in fact, is not so much the subject matter of mathematical knowledge, as the way such knowledge is established. This happens because mathematics is no longer considered as something static, as a sort of naturaliter given object, which simply must be acknowledged in its more or less obvious philosophical sense. Mathematics is conceived not so much as something which exists, as something which is being done; it is more like a process than an entity. Consequently, one realizes that any philosophical consideration of mathematics cannot at all be univocal, but depends on the way of doing mathematics which one is willing to accept as adequate. In other words, it has become clear that mathematical knowledge is not an undifferentiated monolith, but depends on several conditions of inner construction, so much so that their examination directly influences the possibility of speaking of mathematical 'contents', determining in particular which of many possible contents may or may not be admitted as a subject matter of mathematics. Thus today the philosophy of mathematics appears to be essentially the study of the 'foundations' of mathematical knowledge, considered as a set of conditions on the basis of which certain statements can be made. In some cases, its purpose may be to determine which of those conditions are indispensable to the correctness of such knowledge; in other cases, its aim may be more descriptive than normative, tending to point out 'on what conditions' certain propositions can be made or admitted in mathematics, without further undertaking to define which of such conditions are indispensable and which, perhaps, are avoidable. 
Thus the interest in the subject matter of mathematics seems to disappear; but, on closer examination, it turns out that it has simply been deferred or, better still, subordinated to a methodologically prior clarification. In fact, before dealing with a certain subject, it is necessary to know in what different ways that subject can be dealt with. That is why - as we shall see later - the problem of content has not at all disappeared from the number of questions in which the philosophy of mathematics is involved today. This change of research has determined, as a consequence, a rapid increase of technical methods in the study of the philosophy of mathematics, which has come to adopt more and more elaborate analytical techniques, developed by the various branches of mathematical logic. Thus the philosophical perspective has appeared to vanish, submerged, as it were, by the large amount of technical detail, but it is possible to show that this is not so. In order to disentangle the apparently hopeless maze of this technical research, it may be useful to trace underneath it a few speculative lines which can be described in a non-technical language, and which thus reveal a close relationship with some of the most constant themes developed by philosophical reflection on mathematics throughout the centuries. The study of the foundations of mathematics can essentially be seen, from a philosophical point of view, as the study of three major themes. The first object of such study is the meaning of the well-known universally accepted fact that mathematics is the paradigm of exact science. Mathematics, in fact, can be considered as a kind of 'place' in which logical exactness can be 'applied' to the utmost degree, owing, perhaps, to the particular simplicity and abstractness of the structures and objects it deals with. One might, however, give a different explanation of this fact, considering mathematics not as something 'in which' logical exactness can be perfectly applied, but simply as something which 'is' the overall development of such exactness, identifying itself with a complex of logical relations, studied in all their various ramifications. Thus we are faced here with the classic problem of the relationship between mathematics and logic. A second set of problems arises when another indisputable fact is taken into consideration - the fact that mathematics has always presented itself, throughout its history, as an abstract discipline, but has nevertheless always dealt with specific subject matter of its own. Considering mathematics in this light, one might ask: what kind of knowledge can be attained through it?
How can it be said to deal with contents and objects which are offered as 'data', and yet are no data at all from the point of view of sensible experience? We are here confronted with the problem of mathematical intuition, considered as a real source of knowledge, to be clearly distinguished from that further form of mathematical activity which consists in the systematic construction of various theories. Indeed, the most delicate point of this problem is precisely the comparison between the intuitive moment and the moment of theoretical construction, since
it is impossible to deny that, in many cases at least, mathematical theories are in fact an exact and systematic codification of what is known intuitively, and that, on the other hand, intuition is not sufficiently reliable unless it is supported by logical proofs. This kind of problem immediately implies a third one. Once it is assumed that the contents of mathematical knowledge must also be taken into consideration, how are they characterized? What kind of objects does mathematics intend to deal with? Here again we find the old problem of the nature of mathematical entities, which today is considered not so much as the subject matter of ontological research, as the research of some structure of objects, assumed as the structure of a primitive universe, from which everything else can be proved to be attainable by means of adequate constructions. In this sense, the study of foundations reveals a double aspect: on one hand, it still is an inquiry into the kind of condition that allows a reconstruction of mathematics as a whole, starting from a basic structure; on the other hand, it consists in a direct inquiry into the 'contents' of such a basic structure. Contemporary studies of set theory have, in fact, this twofold interest as well: on the one hand, they aim at considering that theory as a possible basis of the whole of mathematics (pointing out how, in mathematics, everything can, if necessary, be reduced to sets); on the other hand, sets can be considered as mathematical objects, which can only be studied directly, taking into account their specific features. Nowadays foundational research, thus summarized and reduced to some of its essential points, is carried out in different ways, which are, as we have already mentioned, of a highly technical and specialized character. 
In order to avoid the inevitable difficulty of comprehension that would result from a direct examination of the present situation, it might be useful to outline a preliminary historical analysis of the most important events that, during the past 150 years, have characterized the study of foundations. However, since the contemporary situation is the special topic of later reports and discussions, our historical survey will be focused particularly on that period of about a hundred years that goes from the discovery of non-Euclidean geometries to Gödel's incompleteness theorem. The fundamental features of mathematical thought just before the discovery of non-Euclidean geometries can be outlined as follows: mathematical disciplines were considered sciences with specific contents, dealing with particular 'objects', about which true statements could be made. The truth of these statements could be asserted in two ways: either through intuitive evidence (which applied to a small number of particularly simple propositions), or through proofs relating the truth of less evident propositions to that of primitive evident ones. This epistemological structure explains how mathematics appeared as a complex of theories in which logical exactness was very high, while the truth of the assertions was firmly and universally guaranteed. To this view, inherited from centuries of tradition, Kant gave his full authoritative support, attributing to mathematics the character of an 'a priori synthetic' knowledge, recognizing its full cognitive value. Thus he rescued mathematics both from the charge of being mere tautological analyticity and from the no less dangerous conception (already clearly outlined by Vico and Hume) of an exact knowledge, only considered as such because of its conventional origin. The a priori synthetic character of mathematics derived from its being founded on 'intuitions' of a very peculiar kind: the well known 'pure intuitions' of space and time. Thus mathematics, too, came to be considered as the science of a certain 'datum', acquiring, at the same time, the characteristics of objective knowledge. However, long before Kant wrote his Critique of Pure Reason, a new seed, which was to change completely the conception of mathematics, had been planted by Gerolamo Saccheri in his Euclides ab omni naevo vindicatus (1733). As is well known, the lack of intuitivity in Euclid's parallel postulate had already in ancient times induced some scholars to try to prove it, starting from the remaining primitive propositions of Euclid's geometry, or else to substitute for it other, intuitively more evident postulates.
Following the same line, Saccheri tried for the first time the way of reductio ad absurdum. To this purpose, he deduced a series of theorems by assuming among the postulates either one or the other form of negation of Euclid's postulate, thus constructing the first body of non-Euclidean doctrines, until he thought, however wrongly, that he had succeeded in his demonstration. All this is very well known, and we mention it here only in order to point out one fact in particular: when one undertakes to prove that a proposition is true, one can try to do so by proving that its negation implies a false consequence. What Saccheri tried to do, however, was more engaging: he wanted to prove
that the negation of Euclid's proposition implied a contradiction. This is no irrelevant detail: a contradiction is, in fact, a kind of falsehood which rests on the formal structure of an argument, without referring to its intuitive contents. Besides, Saccheri himself had not been satisfied with the fact that along his non-Euclidean deductions there emerged theorems hardly reconcilable with ordinary geometrical intuition, but he thought it necessary to go as far as the demonstration of a formal inconsistency proper, and not simply as far as propositions that one might define only 'intuitively false'. Thus the seeds of a radical change were planted. If only inconsistency was considered the necessary and sufficient condition for falsehood in mathematics, it followed that consistency would be enough to ascertain mathematical truth. The fruits of this new approach ripened with the creation of non-Euclidean geometries, in whose gestation a succession of interesting stages can be detected. It is well known that Gauss gradually came very near the point of accepting non-Euclidean geometries, but he never took the final step. Why? Simply because, in his opinion, consistency was a necessary, but not sufficient, condition for mathematical truth. In this, he was still tied up with the traditional conception of mathematics as a science dealing with intuitive contents - a conception which had previously been affirmed by Kant. Bolyai and Lobachevski, instead, decidedly abandoned intuition, although they were not fully aware of this, and cannot certainly be said to have maintained the identification of mathematical truth with mere consistency. However, they contributed to developing a division between the two essential requirements of the mathematical argument - logical correctness and intuitive adequacy.
The latter was no longer considered as a necessary condition for logical exactness; consequently, the concept of mathematical 'truth' itself came to be deeply affected, after having so far been derived from the combination of both those requirements. This change led to a real crisis, when it was proved that Euclidean geometry and the two possible kinds of non-Euclidean geometry had the same degree of consistency. This crisis started in 1868, when Eugenio Beltrami provided the first Euclidean model of non-Euclidean geometry, which was soon followed by other models, provided by great mathematicians such as Klein, Cayley, and Poincaré. Until then, in fact, it can be said that non-Euclidean geometries were cum formidine oppositi: no inconsistency had so far been found in them, but who could guarantee that
there was not any, perhaps hidden somewhere? The fact that a Euclidean model had been found for these geometries, although it did not offer any 'absolute' guarantee for their consistency, proved at least that they were as consistent as Euclid's venerable geometry. At this point, the situation was as follows: there were three geometries, all equally entitled to be considered consistent, and yet mutually exclusive (in the sense that, for example, while one said that the sum of the angles of a triangle is equal to that of two right angles, another said that it is smaller, and yet another that it is greater). Which of these geometries was true? The situation was quite complex: identifying truth with consistency, all three could be said to be equally true; but, in this way, the fundamental characteristic of the concept of truth was denied, according to which it is impossible to consider mutually exclusive propositions as simultaneously true. This characteristic, however, is based, in its turn, on the concept of truth as adequacy between the meaning contained in a proposition and the actual structure of objects it is dealing with. There were only two ways of overcoming this crisis: either by recognizing that the three geometries dealt with different objects, or by admitting that none of them actually dealt with objects at all, and the problem of truth or falsehood did not really exist for them. The first alternative was historically premature, as it needed a series of notions about the nature of 'models' of mathematical theories, that were to be developed only after 1930. It was natural, therefore, to choose the second alternative (which was also suggested by other developments in contemporary mathematics), and to the different geometries was attributed the nature of abstract theories, dealing with no particular kind of object, therefore consisting of no real 'propositions', but of simple linguistic constructions neither true nor false. 
Klein's famous results were to emphasize this conviction, showing that the passage from one geometry to the other could be simply reduced to the choice of one or another invariant within a group of transformations, thus establishing not only a closer relationship and an equal legitimacy between the different geometries, but stressing also the abstract character of their study. The last step in this direction was taken by David Hilbert, who, in 1899, propounded his famous axiomatization of elementary geometry in his Foundations of Geometry. In this rigorous axiomatic construction, the primitive terms of a mathematical theory were no longer assumed to have
an intuitively known meaning, but a series of logical connections among those terms was axiomatically described and adopted as a kind of 'implicit definition' of them. For example, terms such as 'point', 'line', 'plane', etc. were meant to denote nothing precise in themselves, but could freely become the names of any objects, provided these formed a structure in which the axioms could be interpreted in such a way as to become true. As the terms did not mean anything in themselves, so the axioms were considered neither true nor false in themselves, but could become so in an infinite number of ways, according to the types of interpretation that could be found for them in the most different structures of objects. In this way, Hilbert managed to find interpretations capable of showing, each time, the truth of all the axioms except one, thus proving its independence (that is, its indeducibility from the others). The proof of the independence of the parallel axiom came thus to be considered equivalent to the justification of non-Euclidean geometries; similarly, the proof of the independence of Archimedes' axiom was equivalent to the justification of non-Archimedean geometries, and so on. If we now reconsider the development of 19th-century geometry as a whole, as it has just been briefly outlined, bearing in mind the three fundamental points we started with, we can make the following observations: the requirement of formal exactness and logically irreproachable clarity lost its purely expository character, becoming an integral part of geometrical knowledge, in the sense that the axioms ensured, through their logical connections, not so much an elegant description of geometrical entities, as the very 'construction' of them; the point, the line, the plane, were nothing precise in themselves, but only what their axioms said.
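Hilbert's independence proofs can be restated in modern model-theoretic terms - a reformulation not available to him in 1899, offered here only as a clarifying sketch:

```latex
% An axiom A is independent of the remaining axioms T
% when neither A nor its negation is deducible from T:
T \nvdash A \qquad\text{and}\qquad T \nvdash \neg A .
% Hilbert's method establishes this semantically, by exhibiting two
% interpretations: one making all of T together with A true,
% and one making all of T together with the negation of A true:
\exists\, \mathfrak{M}_1 \;\bigl(\mathfrak{M}_1 \models T \cup \{A\}\bigr)
\qquad\text{and}\qquad
\exists\, \mathfrak{M}_2 \;\bigl(\mathfrak{M}_2 \models T \cup \{\neg A\}\bigr).
```

With A taken as the parallel axiom, the Euclidean models of non-Euclidean geometry constructed by Beltrami and Klein supply exactly the second kind of interpretation.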
This meant the vanishing of any form of real mathematical knowledge which was not purely deductive, and in particular of mathematical intuition, which was reduced to a merely psychological fact helping in the choice of certain axioms, without however actually justifying them. More particularly, this meant that every reference to the so-called 'evidence', which formed the basis of ancient axiomatization, had been completely banished. Furthermore, the concept of mathematical contents, as well, was completely revoked: there were no specific 'objects' with which geometry could deal as specifically as, for example, a chemist deals with elements and compounds; geometry was simply a hypothetico-deductive knowledge, an argument moving from certain premises to certain conclusions, independently of the fact that there were objects that might be used to prove the truth of those premises and conclusions. This - it must be noted - was but an obvious consequence of the fact that the character of truth or falsehood had been denied to geometrical expressions. In fact, if there were specific objects proper to geometry (if, for example, a triangle were something existing autonomously), they should necessarily have or not have certain properties (like that of having or not having internal angles whose sum corresponds to 180 degrees). Therefore, each proposition concerning such objects should always be either true or false (for example, only one of the three propositions stating that the sum of the angles of a triangle is respectively equal, greater and smaller than two right angles, could be true, and the other two would be false). Such a change in the nature of the axiomatic method, particularly noticeable in the field of geometry, also took place in the field of another mathematical discipline, viz. algebra. For centuries this had been conceived as the theory of numerical equations, but in the beginning of the 19th century, after Ruffini's and Abel's discovery of the insolubility of equations above the fourth degree through radicals, the traditional view of algebra lost most of its interest, and from the discovery of limitations within that view, new perspectives started to emerge. It is well known, in fact, that from Galois's study of the causes of the above-mentioned insolubility, within the theory of transformation groups of equation coefficients, group theory was to emerge, constituting a fundamental branch of modern abstract algebra. Besides, another new perspective developed between 1830 and 1847 in the school of English algebraists such as Peacock, De Morgan, and Boole.
In his well known Treatise on Algebra, Peacock put forward the fundamental observation that the correctness of operations and calculus in normal numerical algebra is based, not so much on the fact that numbers and their operations have some 'intrinsic' properties as on the fact that certain explicit rules in the use of the operation signs are respected. Consequently, the very nature of operations was defined by these rules; and sets of different rules, or old rules used without certain limitations, could determine algebras which were different from the traditional numerical ones (known as 'symbolic algebras'), but were equally legitimate. This idea directly foreshadowed the modern conception of abstract algebra, conceived of as a theory of arbitrary operations, explicitly defined on
indefinite sets of objects. Besides Peacock, both De Morgan and Boole contributed to the concrete realization of this idea. The century, then, saw the development of other algebras as well, different from that of numbers - such as the algebra of matrices (Cayley), of quaternions (Hamilton), of vectors (Grassmann and Peano), etc. Of these, perhaps the most interesting for us is the algebra of logic, prepared by De Morgan and effectively developed by Boole in 1847 and 1854. The importance of the creation of an algebra of logic was twofold: first, it revealed how the formal character of logical deduction, already clearly foreseen by Leibniz in view of a possible translation of it into calculus, could be opened to the new perspectives of 'logical calculus'; secondly, it led to a significant incorporation of logic into mathematics. In fact, if it was true that the algebra of logic was only one of many possible algebras, it was then quite clear that mathematics, incorporating all the various kinds of algebras (which, after all, identified themselves with all the various kinds of mathematical theory), could incorporate logic as well, as one of its many branches. The development of the algebra of logic, which we will not examine here in detail, together with the more general development of algebraic research in a broader sense, produced the following effects, with reference to the three fundamental points of view we have agreed to adopt: on one side, they emphasized the formalistic and abstract character of mathematics, and reinforced the conception according to which the axiomatic method postulated or created mathematical objects, thus eliminating the idea of the subject matter of mathematical knowledge and diminishing the importance of intuition; on the other side, the relationship between logical exactness and mathematics was stressed to the point that logic itself became a branch of mathematics.
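A minimal illustration of the point (our own, not drawn from the text): in Boole's algebra the familiar signs of numerical algebra are reinterpreted over classes, and the calculus is governed by explicit rules, some of which have no numerical counterpart:

```latex
% Boolean interpretation: x, y, z denote classes; xy their intersection,
% x + y their union (for Boole, of disjoint classes),
% 1 the universe of discourse, 0 the empty class.
x(y + z) = xy + xz \qquad \text{(distributivity, exactly as in numerical algebra)}
```
```latex
x^{2} = x \qquad \text{(Boole's `law of duality': true of every class, false of most numbers)}
```

It is precisely the legitimacy of a rule like x² = x, numerically false yet perfectly correct in its own algebra, that exemplifies Peacock's thesis that an algebra is constituted by its rules rather than by intrinsic properties of its objects.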
While the study of the foundations of geometry and algebraic research developed along these lines, a dialectically opposed result was obtained by the research on the foundations of analysis. This science, which came into being towards the end of the 17th century, greatly flourished throughout the 18th century, and was authoritatively codified in the great treatises by Euler and Lagrange. However, the fundamental concept on which it was based - the concept of 'infinitesimal' - was still a vaguely intuitive one, and was identified with the imprecise notion of an 'infinitely small' quantity. For a time, this did not cause any technical difficulties, only
arousing perplexities and protests on the part of some philosophers, but in the beginning of the 19th century the lack of precision in the concept of infinitesimal began creating difficulties, especially within the theory of series. This need for 'exactness', then, persuaded almost all the great mathematicians of the time to give their contribution to the logical clarification of analysis. At first, this critical clarification was made possible by a precise formal statement: towards 1820 Cauchy gave his well known definition of the concept of limit, which - being no longer linked with geometrical or physical intuitions, but being of a totally explicit and formal character - not only freed analysis from its more or less explicit dependence on mechanics and geometry, but also provided it with the indication of the basic operation to which all the others could be traced back. It is, in fact, well known that the definitions of a derivative, an integral, a differential, the sum of a series, the continuity of a function, are all explicitly based on the use of the concept of limit. After this first stage, the new formal concept of limit came to replace in analysis the old one of infinitesimal (which can also be properly defined by using the concept of limit), but thus analysis seemed to acquire the character of a discipline with no specific domain of 'objects' to deal with. Soon, however, this domain of objects was found - that of real numbers, in whose field the operation of limit could always be applied. Thus real numbers (and shortly afterwards, by extension, complex numbers as well) became the object of analysis, which again acquired the character of a discipline with a specific content. This new awareness was followed by a rapid reformulation of theorems and definitions, which had previously been expressed in the more or less figurative language of geometric or physical intuition, but were now based solely on real numbers and their operations.
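The definition in question, in the now-standard symbolism (a modern restatement: Cauchy's own formulation of about 1820 was verbal, and the epsilon-delta notation was fixed later, largely by Weierstrass), runs:

```latex
\lim_{x \to a} f(x) = L
\;\iff\;
\forall \varepsilon > 0 \;\; \exists \delta > 0 \;\; \forall x \;
\bigl( 0 < |x - a| < \delta \;\Rightarrow\; |f(x) - L| < \varepsilon \bigr).
% The derived notions mentioned in the text reduce to this single
% operation; for instance the derivative:
f'(a) = \lim_{h \to 0} \frac{f(a+h) - f(a)}{h}.
```

Every appeal here is to explicitly stated inequalities between quantities, with no residue of geometrical or mechanical intuition.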
This is the first sense of the expression, 'arithmetization of analysis', which was used to indicate this process of critical revision (arithmetizing meaning, so far, simply reducing the arguments to numbers). Very soon, however, the concept of arithmetization of analysis acquired a more complex meaning, which derived from a research aiming at understanding 'what real numbers were'. In 1872 various definitions of real numbers were simultaneously put forward, starting from rational numbers: Dedekind, Lipschitz, Cantor, Méray and Weierstrass gave some still well-known definitions, all characterized by the fact that a real number was determined by resorting to an infinity of rational numbers.
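Two of these 1872-era constructions can be sketched schematically (our summary, not the author's text):

```latex
% Dedekind: a real number is identified with a 'cut' (A, B),
% a partition of the rationals in which every element of A
% lies below every element of B:
A \cup B = \mathbb{Q}, \qquad A, B \neq \emptyset, \qquad
\forall a \in A \;\, \forall b \in B \;\, (a < b).
% Cantor (and, similarly, Méray): a real number is an equivalence
% class of Cauchy sequences of rationals, two sequences being
% identified when their difference vanishes:
(q_n) \sim (r_n) \;\iff\; \lim_{n \to \infty} (q_n - r_n) = 0.
```

In both cases a single real number is determined only by an infinite supply of rational numbers, which is exactly the common feature the text emphasizes.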
The conclusion which was reached at this point was that, in order to define real numbers, it was enough to know what rational numbers were. On the other hand, the study of the so-called extensions of numeric fields had, already for some time, made it clear that it was possible to arrive at the 'construction' of rational numbers starting from natural numbers. Thus, finally, natural numbers came to be conceived as the primitive mathematical entities, starting from which all the other kinds of number, real and complex, could be obtained. It was in this spirit that Kronecker uttered his famous saying: "God made natural numbers, all the rest is man's work." Now, considering that the theory of natural numbers is arithmetic, it is easy to understand how this programme of reducing analysis to the basis of natural numbers was meant as a reduction of analysis to arithmetic. It is in this deeper sense that the above-mentioned expression, 'arithmetization', must be understood. At this stage, we can say that analysis had reached a position comparable with that of the most advanced Euclidean geometry. A precise domain of objects had been determined for it, and they could be dealt with as its contents; its various theoretical concepts had been given rigorous formal definitions, and it could now make true statements about its own objects, starting from a particularly intuitive and secure basis (natural numbers), and proceeding with logical exactness in the development of the various constructions. As for intuitivity, it is true that the old intuitivity of a geometrical or physical kind had been abandoned, and that, for example, the cases of continuous but nowhere differentiable functions put forward by Weierstrass hardly agreed with that kind of intuition, but a different type of intuition had been introduced - precisely that of the proper objects of real and complex analysis. But could it all stop here?
Or was it not inevitable for the critical question, "What are real numbers?", to be replaced by the new one - "What are natural numbers?" Here the various positions became more complex. Certain mathematicians, perfectly aware of the fact that the research of a primitive foundation must at a certain point stop, thought that natural numbers were the right stopping point, beyond which no simpler, more familiar and intuitive, mathematical structure could be conceived. This was the position of the already mentioned Kronecker, shared in part by both Dedekind and Peano. Dedekind, in fact, tried to give an answer to the question, "What are natural numbers?", and thought he had
found that answer in logic. His answer, however, was not a deduction of the concept of natural number from notions of pure logic, but was essentially a subtle logical analysis of the 'contents' implied in the intuitive notion of natural number, supported by some elements of the logic of classes. The result was an axiomatization of arithmetic as a science dealing with specific contents, quite similar to Euclid's axiomatization of geometry. Peano, instead, was more clearly on the line of the primitivity of the concept of natural number, of which he gave a purely formal axiomatic characterization, substantially similar to that of Dedekind, and yet different in spirit. (It did not, in fact, pretend to be a 'foundation', but simply a sort of description of the structure of natural numbers.) Other mathematicians, however, followed different paths, trying to prove that not even the concept of natural number was primitive, but was definable starting from something deeper. These other mathematicians were two giants of mathematical thought - Frege and Cantor. Frege thought that the natural number could be defined starting from pure and simple logic. According to him, in arithmetic there was no notion so irreducibly peculiar that it could not be derived, by means of exact definitions, from purely logical concepts, such as those of class and relation. Through a patient and majestic work, he undertook the reconstruction of arithmetic starting from these logical bases, carrying out the task of proving that mathematics was but a branch of logic - exactly the opposite of what the creators of the algebra of logic had maintained. This pre-eminence of logic over mathematics was claimed by him also from another point of view. Since its origins, logic had been given the responsibility of guaranteeing everywhere, and in the field of mathematics in particular, the existence of the requirements of exactness.
This had given to logic a sort of right and duty of supervision over mathematics, which had been lost in those systems of thought where logic had become a branch of mathematics. Frege, instead, assigned back to logic its proper role, as the discipline of full deductive exactness, setting forth the precise formulation of a series of deductive rules, according to which (without referring to more or less vague intuitive connections) a proposition could be said to be the immediate logical consequence of certain others. In other words, he no longer constructed an algebra that could be interpreted as the abstract expression of some well known parts of traditional logic (such as, for example, syllogistics), but he endeavoured ex professo


to determine a correct and complete system of logical rules, on the basis of which it was possible to justify all the correct proofs of mathematics. In doing so, he adopted a descriptive and objective point of view, deliberately searching for certain contents, as he thought that the laws of correct thinking and arguing have a reality of their own, which it is our task to discover, but whose existence and validity do not depend on us. This attitude was also reflected in his conception of mathematics, which he considered a science with genuine truth-claims and objective intent, like logic, of which it was in fact only a branch. Going back to our usual tripartition, we can notice in Frege a return to the conception of mathematics as a science dealing with specific contents, with a field of objects about which it could make true statements, while the relationship between mathematics and logic was conceived by him in a new way, as an inclusion of the former in the latter. And it is precisely this that characterizes the peculiarity of his position. But the search for a foundation of analysis of a deeper, objective kind than the natural number did not yield only this type of result. Starting from another direction, tied up not so much with logical requirements as with the technical problems of analysis, Georg Cantor, between 1874 and 1897, managed to work out the theory of sets, in which the concept of number was defined in very general terms, applying such generalization to both the 'ordinal' and 'cardinal' views of the act of counting. Both these generalizations were characterized by the fact that, in them, one could speak of numbers without necessarily knowing how to 'count', and only using notions connected with sets.
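The cardinal form of this generalization rests on equinumerosity: two sets receive the same cardinal number exactly when a one-to-one correspondence exists between them, a notion that presupposes no prior ability to count. In a modern rendering (the notation is ours):

```latex
% Equinumerosity: A and B have the same cardinal number iff
% there is a bijection between them.
A \approx B \;\iff\; \exists f \colon A \to B \ \text{with } f \text{ bijective}
% On the Frege-style reading, the number of A is the class of all
% sets equinumerous with A:
\operatorname{card}(A) = \{\, X : X \approx A \,\}
```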
Natural numbers came to be considered as particular cases of both ordinal and cardinal numbers - in fact, as their finite case - while the theory ranged over the wider space of the transfinite, characterized by the deliberate use of the concept of 'actual infinity', which for a long time had been regarded with suspicion. Thus set theory allowed a 'definition' of natural numbers and of their operations, not unlike the one to be found in Frege's work, thereby providing analysis with the starting point for its constructions. Furthermore, set theory offered the means of actually obtaining real numbers, since this process essentially implied, as has already been mentioned, the consideration of infinite classes of rational numbers and, therefore, the use of notions and operations of sets. It is perfectly correct, then, to assert that the conceptual development



of the research on the foundations of analysis reached results which were dialectically opposed to those of the research on the foundations of geometry. In the field of analysis, an objective interest in contents was revived, while an adequate structure of objects was sought as a 'foundation' for the whole construction (be it the structure of natural numbers for Dedekind and Peano, or the structure of sets for Cantor, or the structure of logical entities for Frege), whereas, in the field of geometry, research led to a perspective of total formalization, based on a sophisticated use of the axiomatic method as a pattern for all hypothetical-deductive knowledge. In the one field, mathematics was still considered as a discipline characterized by the possession of particularly firm, universal and necessary truths, while, in the other field, it seemed detached from the problems of truth and falsehood. On one side, intuition still had a role to play and was sustained by logical argumentation; on the other side, it was practically banished and everything was entrusted to pure formal deduction. However, a rather dramatic fact came to upset the situation again, re-uniting, in several aspects, these two tendencies. It was the foundational crisis which followed the discovery of antinomies in set theory. The facts are too well known to be mentioned here in detail: Burali-Forti, Cantor, and Russell discovered contradictions which could not be eliminated, either from Cantor's set theory or from Frege's foundation of arithmetic on logic, because they were made up of pairs of mutually contradictory propositions which were at the same time both implied by intuitive assumptions. Thus, even that conception of mathematics which, relying on the full intuitive evidence of certain elementary logical principles, did not seem to have to fear the rise of contradictions, once correct deduction started from those principles, was abruptly faced with the problem of consistency.
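One of these antinomies, Russell's, which is described in what follows, admits a one-line formal statement; the notation below is a modern rendering:

```latex
% Unrestricted comprehension yields the set of all sets that are
% not members of themselves:
R = \{\, x : x \notin x \,\}
% Asking whether R belongs to itself then gives the contradiction:
R \in R \iff R \notin R
```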
It is enough to mention here Russell's antinomy, which is constructed by applying without restriction the following two principles, which seem incontrovertible: each condition expressed in a proposition determines a set (that is, the set of objects which satisfy that condition); and it is possible to construct sets whose elements are in their turn sets. The consequence of this fact was clear: intuitive evidence, which since the time of non-Euclidean geometries had been considered a condition not necessary to guarantee the consistency of a mathematical argument, now appeared as a condition not even sufficient for it. Therefore, it was not only the advocates of abstract formalism, the constructors of axiomatic


systems conceived as complexes of propositions neither true nor false, who had to face and solve the problem of the consistency of their axioms, but also those who had thought it possible to follow the most traditional lines of mathematics. This fact marked a further strengthening of the axiomatic point of view. In view of the crisis of antinomies, there in fact emerged a kind of diagnosis which can more or less be outlined as follows: we are in difficulty because we have relied too much on the intuitive (someone even called it 'naive') notion of set. Let us try, instead, to axiomatize set theory as well, as has already been done for so many other branches of mathematics, and let us construct axiomatic systems in an adequate way, so that no antinomies can be deduced from appropriately chosen axioms. And this is what actually happened. In 1907 the first axiomatic system for the theory of sets was put forward by Zermelo; and that system, later modified by Fraenkel, is today still one of the most frequently used axiomatic expositions of that theory. As is well known, other axiomatic set theories have been constructed up to our own time. The names of Skolem, von Neumann, Bernays and Gödel, among others, are to be added to those of Zermelo and Fraenkel in this field. The successive constructions are of considerable mathematical and philosophical interest, but we cannot deal with them here. We must, instead, ask a question that brings the construction of axiomatic set theories within the general perspective of this essay: why have so many axiomatic set theories been constructed, and why are they still being constructed now?
The answer is twofold: first, the different axiomatic theories that have been put forward, while excluding the inconsistencies which have so far been found in the intuitive set theory, do not possess (for reasons that we shall mention later) an absolute guarantee of consistency, that is, a guarantee against any possible appearance of future inconsistencies; secondly, none of them manages to solve completely all the mathematical problems connected with the intuitive theory (for example, the well-known 'continuum hypothesis' is still today an 'independent' proposition in all the best-known axiomatic set theories, being neither provable nor disprovable within them). Each of these two facts has a specific meaning: the first fact points out that the crucial problem for sets, too, is at a certain point to erect a formal theory for them which would enable us to 'construct' them and which would offer consistency as a guarantee for such constructions. The



second fact, instead, indicates that such a perspective is not sufficient, because, even assuming that the axiomatic set theories historically propounded are free of inconsistency, this has not impeded interest in the search for new theories. This research aims at defining an ever-increasing number of 'intrinsic' properties connected with sets, considering them from a descriptive point of view. Thus, the two fundamental points of view which we outlined at the beginning of this essay are both present also in the research that guides the construction of axiomatic set theories. After the presentation of the first axiomatic set theories, the formalistic point of view - according to which the whole mathematical domain is to be reduced to a series of abstract axiomatic constructions, guaranteed only by their consistency - seemed destined to prevail as the definitive perspective on the nature of mathematics and the entities it deals with. This was, in fact, the most authoritative and widespread position in the 1920s. It was codified by Hilbert in a famous 'programme' which, in an approximate and simplified way, could be summed up as follows: let us formulate the fundamental mathematical theories in a rigorously formal axiomatic way, and let us try to prove the consistency of each of them; if we succeed in this enterprise, then we shall have definitively solved the problem of foundations, and we shall have a secure mathematics. The practical realization of this programme implied two choices: from which axiomatized mathematical theory to start, and what means to use in the consistency proofs. The first choice was rather obvious: it was reasonable to start from the simplest theory, that is, from elementary arithmetic. But, however simple, this theory required, for its development, the use of rather complex logical instruments: from a methodological point of view, it would not have been very correct to use similarly complex means to prove its consistency.
In order to avoid this methodological objection, Hilbert proposed that on the level of 'metamathematics' (that is, of a treatment 'above' mathematical theories, specifically intended to determine their consistency), very elementary and 'secure' methods should be adopted - the so-called 'finitistic' methods of combinatory character. It is important to notice that the resort to these methods implies a resort to intuition (their security, in fact, rests essentially on their full intuitive controllability). Furthermore, the whole meta-mathematical treatment consciously deals with 'contents': in fact, the formal systems, in which mathematical theories are abstractly formalized, become concrete objects, to which meta-mathematical enquiry is directed with objective intent and with the purpose of making 'true' statements about them. (For example, it aims at proving that a certain formal system is truly consistent.) Thus, even within Hilbert's formalistic programme, we can notice the presence of an objective and intuitive discourse, together with a purely formal treatment. After about ten years of research, which only managed to prove the consistency of formal systems weaker than the whole of elementary arithmetic, in 1931 Gödel's famous incompleteness theorem proved the impossibility of carrying out the original programme. In fact, Gödel's theorem showed that neither elementary arithmetic, nor mathematical theories in general capable of expressing their own syntax within themselves - least of all set theory - could offer a consistency proof based on methods that could be formalized within the theory itself. So, not only by using finitistic methods (which can be trivially formalized in arithmetic), but even by using other more powerful instruments, it was impossible to prove the consistency of arithmetic, unless these instruments exceeded the complexity of what arithmetic itself could express. Yet another fact was added to this: the keystone of Gödel's theorem consisted in the construction of a proposition of elementary arithmetic which must be recognized as a 'truth' about common natural numbers, but which cannot be proved at all in formalized arithmetic. Now the truth of this proposition was established by an 'objective' argument in meta-mathematics. Consequently, the range of such objective truths had to be much wider than that of formally demonstrable truths.
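The incompleteness results just described can be stated schematically; the notation below is a modern rendering, not the original formulation. For any consistent, effectively axiomatized theory $T$ capable of expressing its own syntax:

```latex
% First incompleteness theorem: there is a sentence G_T, true of the
% natural numbers, which T can neither prove nor refute:
T \nvdash G_T \quad \text{and} \quad T \nvdash \neg G_T
% Second incompleteness theorem: the formalized consistency statement
% Con(T) is itself such an unprovable sentence:
T \nvdash \mathrm{Con}(T)
```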
This was already an implicit recognition of the fact that the reduction of mathematical knowledge to a purely formal domain was inadequate; besides, it again raised an 'ontological' question: if numbers have formally indemonstrable properties, their 'mathematical existence', too, is something autonomous with respect to formalization. The overall result of this research can perhaps be summed up as a situation of balance between the opposite tendencies in the philosophy of mathematics: the purely descriptive point of view connected with 'contents' had revealed its limits in the fact that it could not guarantee mathematics against the appearance of inconsistencies; besides, it could not, by itself, sufficiently guarantee the existence of certain mathematical 'truths' (to this purpose, we have already mentioned the example of the continuum hypothesis, which is quite clear from the point of view of its



'content', and yet neither acceptable nor refutable on that basis, for want of a formal proof or refutation of it). However, it had also been recognized that no kind of mathematics can be done without going back to the recognition of a certain intuitive knowledge of contents. This fact had emerged particularly from the failure of the extreme formalistic programme, which certainly did not mean the defeat of the requirement of formal exactness, and which could not wipe out the great advantages that the use of formalism assures to mathematics (one only has to think of the 'polyvalence' which it assures to the formal systems of the different mathematical theories). It simply meant the impossibility of reducing mathematical knowledge to a complex of intellectual 'constructions' of an exclusively formal logical character. In this situation of balance one must notice, on one side, the re-emergence of semantic problems within mathematical logic. (In particular, the solution of the problem of consistency for a formal system was from then onwards sought especially through the construction of 'models' for it.) On the other side, one can notice the ever-increasing importance of a movement in the philosophy of mathematics which had appeared on the scene since the time of the First World War - the intuitionistic movement. Its fundamental position consists in claiming for mathematics an autonomous cognitive status, which separates it both from a purely logical formal foundation, constituted by its formulation in axiomatic systems, and from its reduction to logical structures reputed to be more profound, such as those of sets. For the intuitionists, the foundation of mathematical knowledge is a particular 'intuition' which is specific and prior to any 'exposition' of it in one or another particular language.
In conformity with these philosophical premises, intuitionism had elaborated a series of methodological prescriptions about the correct way of constructing mathematical theories, explicitly pointing out the necessity of not using instruments that were not 'constructive'. After Gödel's result, the need to prove the consistency of mathematical theories certainly did not diminish, although it now became clear which instruments could not be used for the purpose. Thus, it was necessary to know which instruments could still be considered 'secure' without being subject to the limitations pointed out in Gödel's theorem. From a proof of the consistency of arithmetic, set forth by Gentzen in 1936, it gradually became clear that the intuitionistic methods could be considered such 'secure' methods, so that



it has lately been possible to speak of a reconsideration of Hilbert's programme in precisely this new sense. At this point, the task of this historical introduction can be considered completed. We have tried to describe the philosophical motivations which lie at the basis of the fundamental trends of the philosophy of mathematics of our times. Contemporary research is still concerned with contents, particularly in the field of set theory, which not only constitutes the basis for the whole of semantics, but must in itself be considered as a theory which, however much it can be formalized and axiomatized, cannot yet receive guarantees for its correctness from outside, and must therefore be itself the source of its own axioms: it can thus be considered also as the most plausible ontological basis of mathematical knowledge. At the same time, the formalist concern has by no means exhausted itself: not only has the level of all mathematical treatments become increasingly more formal - with highly formal theories such as algebra and topology being the most characteristic ones of modern mathematics - but formal research in the strict sense is also developing in highly interesting directions. While the concept of proof is being generalized, and new approaches are being taken to the inquiry into pure formal systems and into the conditions of their consistency, the very concept of calculus is being widened. Finally, the renewed interest in epistemological and methodological problems connected with the question of intuition and with the exigencies of constructivity is bringing new life to the intuitionistic school as well. These are, in conclusion, the most important tendencies which contemporary inquiry into foundations places before the philosopher of mathematics, and which the history of such inquiry has been clarifying within itself.
Compared with the past, however, these tendencies present quite new aspects: after their historical development has clarified their limitations together with their strong points, we have today a situation of interpenetration and complementarity, in which the different points of view, after age-long dissent, seem to have found in the union of their efforts a new strength to face the difficult problems which are still to be found today in the research on the philosophy of mathematics.

University of Genova