Read Part I, Part II, Part III, Part Interlude, Part IV, Part V, Part VI. Part Last.

In going through Feser’s book, many in the comments profess to be confused about what truth means, and about the difference between there being one overall (or foundational) Truth and there being many individual truths. I believe one of us even took up Pilate’s speech and asked, without irony, “*Quid est veritas*?” (To even ask the question presupposes its existence.)

Several other readers also claimed to be empiricists, that is, those who believe in truth but say that all truths are discovered solely by observation. Empiricism is false: all truths cannot be discovered by observation. Quite simply, even its defining statement is self-contradictory. I asked yesterday how we know, since we have not observed them, that there is an infinity of numbers. Answer came there none. This small example proves empiricism false.

But since those who wish to hold to empiricism (for fear of what abandoning it implies?) will not be satisfied by so telegraphic a proof, here is a longer one, given by the (non-theist) philosopher David Stove. I find it exceptionally lovely. We’ve seen this before, but today we’re seeing it again.

Stove shows each of us must come equipped with knowledge which cannot be learned: stuff that is known to be true only through introspection, via what we call intuition or faith, or what yesterday we called revelation; philosophers usually settle on the technical term *a priori* (or on phrases more technical still).

This is just one (of many) proofs given by David Stove in his *The Rationality of Induction*.^{1} He made this argument in support of revealed knowledge in his larger work showing induction is reasonable.^{2} A man named Bolzano is named in the proof: all you need know about that gentleman is that he disputed the idea that we all come with built-in knowledge.

Reading this passage, as with reading any proof, requires some sophistication. This cannot be avoided. The formula numbers are as they appeared in Stove’s book. The unseen formula “(149)” is here equivalent to “(166)” below.

First, as to our knowledge of validity. Bolzano says that the validity of *barbara*, or rather, that the *barbara* schema always preserves truth, is a hypothesis reasonably believed by us, just because of the extensive experience we have had of never finding a counter-example to it. That is, our grounds for believing (149), or rather, for believing

(166) For all x, all F, all G, either “x is F and all F are G” is false, or “x is G” is true,

consist just of observations we have made, such as

(151) Abe is black and Abe is a person now in this room and all persons now in this room are black.

That is putting it starkly; still it is, in essence, what Bolzano believes. We learn deductive logic by inductive inference.

But now, this is tacitly to concede, to certain propositions of non-deductive logic, precisely the intuitive status which Bolzano expressly denies to any proposition of deductive logic. Our putative logic learner is supposed to be devoid of all intuitive logical knowledge. Yet Bolzano is evidently crediting him with knowing, straight off, at least this much: that

(167): (151) confirms (149).

Of course, he need not be supposed to *know* that he knows (167); still, he is evidently being supposed to know it. But to know (167) is to have some logical knowledge, even if only non-deductive logical knowledge. And Bolzano must suppose that (167) is known by our logic learner intuitively. Otherwise he would have to have learnt it, as he is supposed to be learning (166), by experience. And how would he accomplish this?

It must at any rate be from some observation-statements. I do not know what kind of observation-statements Bolzano would regard as confirming (167): let us just call these observation-statements

(168) O_{1}.

But even if our logic learner *has* found by experience that O_{1}, he will be no further advanced. To learn (167), he needs to know, not only that O_{1}, but that

(169): (168) confirms (167).

But this is a proposition of logic too. If he does not know (169) intuitively, as by hypothesis he does not, then he will have to learn it, too, from experience. No doubt from some observations

(170) O_{2}.

But that is not enough. He will also need to know that

(171): (170) confirms (169);

and so on.

Obviously, he is never going to make it. Experience is *not* enough.

As a sketch: to even know that an observation confirms some statement is to use the knowledge that “observations confirm this statement”, and that knowledge could not itself have been discovered observationally, or empirically. We must already know (at least) this before we begin, just as we must know the axioms before we begin mathematics. Axioms by definition are truths which cannot be proved.


————————————————————————————-

^{1} pp. 162-163. This book, especially the second half, is a treasure that all statisticians, probabilists, and logicians should read.

^{2} Yes, some people think it isn’t. Bolzano was not one of these: he thought all (as in all) knowledge was known empirically.

I don’t know of anyone who sincerely proclaims that Empiricism is the metaphysical path to the absolute truth.

So this post is like a strawman.

Good grief, Mr. Briggs. You surely shoot a lot of bullets to the sky. I wonder when you’re actually gonna confront the arguments put against your position and start aiming in the right direction, for I just feel bored right now.

Luis,

“I don’t know of anyone who sincerely proclaims that Empiricism is the metaphysical path to the absolute truth.” Then you have very little experience in this area, for those that do make this claim are many. Like Bolzano. (Look him up.)

I commented that I think I am an empiricist in the last set of Comments – though admitted I could be wrong.

On Prof Briggs’ definition I was wrong and am not an empiricist, as quite definitely I don’t think that “all truths are discovered solely by observation.”

I totally agree with mathematical/logical truths separate from observation, i.e. if R = “All cats are creatures understanding French and some chickens are cats” then the proposition F = “Some chickens are creatures understanding French” is true.

I also totally agree that I make assumptions about the world, and about how I am going to examine both those assumptions and the world when I reason about things (and I assume these assumptions are true, and if someone was to ask me whether it was true that I assume these assumptions are true, I’d say yes it’s true - wow, truth must be real hey - there must be a God).

My empiricism comes from thinking that if I am going to try to undertake science – ie apply my assumptions to the world, I am going to choose assumptions which I believe will apply to the world.

The parallel lines axiom is a good example of this: it felt right, felt justifiable, and so it took centuries to realize that this assumption was not unique and that a different axiom could provide useful insights into this world when it came to curved spacetime etc.

Does this mean the parallel lines axiom is false? Prof Briggs what do you think?

I don’t think so – truth depends upon your assumptions – if you assume it is true you get one set of ideas, assume another axiom true you get a different set of ideas.

Internal to themselves questions about truth or falsity don’t really apply.

Rather, when you apply these different axioms to reality sometimes one axiom is useful, sometimes another depending upon what you are doing.

I fail to see how this suddenly links into Aristotelian Theism, but Prof Briggs keeps insisting such a link exists – I await patiently!

Ok, I stand corrected…

An actually proper rigorous definition would be:

“I haven’t the cluest of ideas of where these insights really come from, so I’ll call them revelation; seems sufficiently religious to stir the pot of the easily trolled.”

Honey bees have built-in knowledge. They know how to navigate by the sun and can communicate with other bees to tell them the location and strength of food sources. This knowledge is coded in their genes; they don’t have to learn it. I’m not sure that it leads us any closer to some fundamental truth about what can or cannot be understood, or the nature of knowledge itself.

Seems to me you’re heading toward the ghost in the machine. Still, I’ve got another book to add to my reading list.

I’m beginning to think that philosophy is a bit like statistics, only instead of numbers, it’s the words that are tortured until they tell you what you want (or should that be need?) to know…

Chinahand,

Ah, I see your difficulty. There is a difference between logic and physics. The “parallel lines axiom” as you state it is an axiom of mathematics, of logical truth. It is not a statement of how a particular physics will manifest itself. Just because a mathematical truth exists does not imply that it has a physical counterpart. Nor is it so that, because the physical universe does not have parallel lines, mathematically parallel lines are suddenly non-parallel.

Of course, people can guess that certain mathematical truths apply in this or that situation. But when the observations differ from the math, it does not mean the math is false. It just means that the deductive path that gave the mathematical theorem does not match physical observation.

You say “truth depends upon your assumptions”. Well, Stove has proved to us that we must have (at least) the assumed (or revealed) truth, known a priori, that observations confirm or disconfirm propositions. The assumptions he uses to prove this (the proof I assume you now accept) are those that, for example, allow communication to take place. I’ll let you tell me which of these assumptions you say are certainly false.
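The parallel-lines point above can be made concrete. Here is a minimal sketch, my own illustration rather than anything from Stove or Feser (the helper names are invented for this example): a triangle drawn on the unit sphere, where the parallel postulate fails, has angles summing to 270 degrees rather than the Euclidean 180. Neither geometry is thereby false; each follows from its own axioms.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def tangent_at(p, q):
    # Initial direction, tangent to the sphere at p, of the great-circle
    # arc from p toward q: project q onto the tangent plane at p.
    proj = tuple(b - dot(p, q) * a for a, b in zip(p, q))
    return normalize(proj)

def angle_sum(a, b, c):
    # Sum of the three vertex angles of the spherical triangle a, b, c,
    # in degrees; each angle is measured between the two great circles
    # leaving that vertex.
    total = 0.0
    for p, q, r in ((a, b, c), (b, c, a), (c, a, b)):
        total += math.acos(dot(tangent_at(p, q), tangent_at(p, r)))
    return math.degrees(total)

# A triangle covering one octant of the unit sphere has three right angles.
print(angle_sum((1, 0, 0), (0, 1, 0), (0, 0, 1)))  # 270, not 180
```

The same axiomatic machinery, applied to the plane, gives 180; which geometry matches the physical world is a separate, empirical question, which is exactly the logic/physics distinction drawn above.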

Ok, so let’s parse this out.

What about by chance? What if there’s a bunch of philosophers trying out various ways of dealing with empirical claims and logic, and someone comes up with a straightforward “logical solution” by just trying them out and testing them out if they work or not?

Trial and error?

Because that last paragraph of mine looks just like what an historical account of logic would look like. And after the “mistakes” of Zeno and Parmenides, someone should get ahold of some working logic.

Steve Crook,

This is an interesting objection, but it too fails. A bee can certainly have coded instructions to find its way to flowers and home again, just as we can have them for some behaviors (like digestion). It’s possible we could even have coded instructions that say E = “observations confirm theories” (statistical algorithms work this way, after all). But we can’t *know* the truth of E via coded instructions. There cannot be coded instructions which say “E is true”, which we can call on to say, given our coded instructions, “It is true that ‘E is true’”. Knowing is an entirely different kind of thing.

We’ll do more of this later, when Feser describes philosophy of mind.

Prof Briggs – I don’t think I have any argument with Stove – though I may have misunderstood! I am unsure what you mean when you ask me to tell you which of these assumptions I say are false.

I’m happy we have multiple a priori assumptions about ourselves; about how we behave in reality; and about reality.

We assume these things are true. If asked do we assume these things are true, we’ll say that is true – hence truth exists.

So? With these assumptions – this metaphysics – why should I believe in God?

Luis,

Tell you what. Forget about my words and think about Stove’s. Can you agree with his proof?

It depends on the exact definition of *experience*. If by “experience” you also include the trial and error I spoke about, you don’t need the a priori anymore (because it just works).

Analogy: Darwinian evolution does not require anything but *experience* to create its “designs”. Competing ideas do not require “revelation”, they just need to compete. The working ones win. Each idea will have been created in a brainstorming pool of competing ideas. You do not need a “revelation” here.

If by “experience”, Stove is not including “trial and error”, then yes, I agree wholeheartedly with the proof (seems quite fair to me).

Yes, you did use “infinity” as an example. Still, the above is not logical. *None* can be discovered by observation?

The history of infinity: http://www.math.tamu.edu/~dallen/masters/infinity/infinity.pdf.

Does one’s mathematical experience count as “experience”? One needs quite a bit of mathematical experience to understand pure mathematics. No philosophy, empiricism or not; I’d very much like to see logical arguments.

Prof Briggs says:

It’s possible we could even have coded instructions that say E = “observations confirm theories” (statistical algorithms work this way, after all). But we can’t know the truth of E via coded instructions. There cannot be coded instructions which say “E is true”, to which we can call on and say, given our coded instructions, “It is true that ‘E is true’”. Knowing is an entirely different kind of thing.

———

But rather than knowing it, surely we can assume it and see if this assumption fits with our biases and observations. Certainly it doesn’t guarantee anything, but such an attempt may be useful.

JH,

You misread. It is clearly not “none”.

Mr. Briggs,

A: “All truths cannot be discovered by observation” is not the same as B: “Not all truths can be discovered by observation,” is it? Does “Empiricism is false” imply A or B? I thought the answer is B.

Yes, *pure* (abstract) mathematics has nothing to do with physical existence and doesn’t exist in nature… and therefore some said it’s man-made. So… God has no physical existence. Now, how about defining a few things and putting forth some axioms and premises? Let’s get to the big question. I also think one needs to show that God is not man-made; see also DAV’s comments. Mathematicians (pure mathematicians, I mean) know that theorems only hold within a set of constraints and premises, and probably would agree that abstract math is man-made.

Let me try again, so you can actually pinpoint the mistake I might have made in reading your statement.

All humans cannot live forever. = None can live forever.

Empiricism: Knowledge comes only (or primarily) from sensory experience. http://en.wikipedia.org/wiki/Empiricism

“Empiricism is false” implies that not all knowledge comes from sensory experience.

Let me just put in a plug for Australia’s greatest philosopher, David Stove.

Many of us downunder would like to emphasize Stove’s contribution, and we hope that you will excuse us for Peter Singer, who seems to have been released on the world inadvertently …

My son attends Singer’s old school – they don’t like to talk about him, and younger pupils are banned from reading his Wikipedia entry.

Briggs said: “Empiricism is false: all truths cannot be discovered by observation. Quite simply, even its defining statement is self-contradictory.”

This seems unnecessarily provocative. Richard Feynman, winner of the 1965 Nobel Prize in Physics and famous teacher, wrote: “The principle of science, the definition almost, is the following: the test of knowledge is experiment.” So your statement could easily be construed as anti-scientific. That science is false.

Your example about infinity not being observed in Nature begs the question. You obviously assume infinity is built-in knowledge. But nonstandard analysis doesn’t require it. Science doesn’t require infinity. So I’m afraid there are those of us who are simply not going to make the necessary assumptions needed to prove the existence of God on consistency grounds. We find we only really need to make assumptions that fall short of that conclusion. Or the conclusion empiricism is self-contradictory.

“Axioms by definition are truths which cannot be proved.”

Surely “assumptions” rather than “truths”; it is not unknown for an axiom to fall into the abyss of contradictory logic. Russell’s Paradox, for example.

Most people can observe the outcome of a scientific experiment, but do they know what to make of it? Can the relevance of an experiment be observed?

If infinity is derived from observation, it must be thought of as something extremely large for it will rely on an extrapolation of relative sizes which is what we observe. However infinity cannot be divided into two equal parts.

Are you suggesting that the concept of infinity is sort of irrelevant?

@briggs @ 11:34

My point was more that the bee has innate knowledge, and that it is unreasonable to assume that we don’t. The question is, from where? I’d argue that it’s encoded in the genes and therefore in brain chemistry and structure. It got there via natural selection because it provides some sort of evolutionary advantage.

So, in this case we know the answer to be true because the problem fits a pattern that we are innately familiar with, because evolution has run sufficient experiments for us to have the answer built in.

All dogs know how to hunt and stalk prey. They can learn the finer points, but they have the basics built in. When presented with a hunting logic problem they already know the correct answer without having to analyse it.

Part of the problem with all of this is that we sometimes forget that we are animals, better, superior yes, but still animals. In a way, I think this is really the gift that Darwin gave us, the *opportunity* to see ourselves as part of something, admittedly at the top of the heap, but still part of something and not specially blessed and set *above* it.

“1p. 162-163. This book, especially the second half, is a treasure that all statisticians, probabilists, and logicians should read.”

60 quid though. 60 quid!

I see the point that whatever reason I give you, you’d continue to demand a reason for the reason.

I probably can trace back step by step how I learned, e.g., that the limit as x approaches 1 of (x^2-1)/(x-1) is 2. It would end up with the question of how I learned that 1+1 = 2. My naïve answer is that I used my fingers and that I have the ability to learn and think.
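JH’s limit can be checked numerically, though, fittingly for this thread, no finite table of values proves it. A minimal sketch (the function name `f` is invented for illustration):

```python
# Approach x = 1 from both sides: (x^2 - 1)/(x - 1) tends to 2.
def f(x):
    return (x**2 - 1) / (x - 1)

for h in (0.1, 0.01, 0.001):
    print(f(1 - h), f(1 + h))  # both columns close in on 2
```

Since (x^2 - 1)/(x - 1) = x + 1 for x ≠ 1, the samples are 2 - h and 2 + h, bracketing the limit; the limit itself is deduced, not observed, which is rather the point of the post.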

So I agree that experience is not enough, based on my own *experience*. I need to be able to think, or perhaps I am indeed equipped with certain intuitive knowledge.

But how do you jump to the conclusion that “we *all* of us come with built-in knowledge”? Maybe Bolzano is equipped with it, but do we all?

What is the “built-in knowledge”? Do we ALL come with it? I can’t be sure based on the observation derived from my experience of raising two children and having taught thousands of students. Is this observation “knowledge”? I don’t think so. (You are welcome to convince me that I am a very knowledgeable person. ^_^)

Why do I have the ability to learn and think? Who knows? Perhaps someday some scientist will find the answer, just as it’s been discovered that some people have natural resistance (an ability) to HIV infection due to the absence of a receptor.

“We just know” is not a satisfactory answer for me.

@JH:

“Mathematicians (pure mathematicians, I mean) know that theorems only hold within a set of constraints and premises, and probably would agree that abstract math is man-made.”

And many would not, given that some form of realism (e.g. Platonic extreme realism) is endemic in mathematics.

@George Crews:

“Your example about infinity not being observed in Nature begs the question. You obviously assume infinity is built-in knowledge. But nonstandard analysis doesn’t require it.”

Non-standard analysis does not require infinity? I do not know exactly what you mean by this, but on a first reading you are simply wrong, as even to construct a model of the nonstandard real line (and, by Goedel, to prove the consistency of nonstandard analysis) you have to appeal to strong choice principles; explicitly, you need the Boolean prime ideal theorem to construct free ultrafilters. Or just note that, by the simple laws of totally ordered fields, the inverses of infinitesimals are “infinitely large” numbers. I know of no axiomatic approach that does not assume more or less directly infinity (E. Nelson’s work, synthetic differential geometry, etc.), but given that you have linked to Wikipedia, your knowledge of this matter is surely vast, so maybe you can give me an example?

“Science doesn’t require infinity.”

Once again I am not certain what you mean exactly by this, but given that separable infinite-dimensional Hilbert spaces and their algebras of operators are fundamental to QM (to give just one example), and they are about as “infinite” as it can be, most probably you are simply wrong.

Of course observation isn’t the only way to gain knowledge (er, Truth). Mathematical truths for example. But I don’t let math guide my life decisions (although I may use it as a tool through analogy). The most important knowledge is obtained through observation.

Yes. We seem to have built-in knowledge, at least a form of deductive reasoning. We may be born knowing “how” to walk but still must learn how to apply it. We are born with some knowledge of how to talk, and possibly even know some rudimentary grammar, but we still must learn how to apply it.

If optical illusions are a clue, we also have the propensity to form wrong conclusions when faced with inconclusive evidence — and rather strong beliefs in those conclusions to boot. Furthermore we have a propensity to believe things we wish were true.

If you’ve ever watched a master salesman then you’ve witnessed how easily a person’s beliefs and desires can be manipulated. Want a positive response? Nod your head “yes” while asking a question. Shake your head “no” for a negative response. If you apply these and similar actions when the issues are relatively unimportant, they can net a desired belief or want because of the built-in (but not so infallible) logic that we possess.

If all of this is building toward discovering God through introspection, you are on shaky ground.

Endemic in mathematics? Or in people’s opinions about mathematics? Do not confuse the two. Mathematics is merely counting.

Is that sentence above also a product of innate knowledge, or “how the hell do you know that?”

“Is that sentence above also a product of innate knowledge, or ‘how the hell do you know that?’”

I don’t. I left out the “maybe”. There is some evidence that all human languages have a common subgrammar. Whether that’s because it’s built-in or because of cross-contamination with previous civilizations remains to be seen. The idea is just something I remember encountering long ago and I’ve lost the reference. I don’t know what became of the idea.

What is it about some people intimately engaged in precise quantitative subjects (e.g. math, statistics…) that makes them feel compelled to engage in broad philosophical discussions on the vaguest, most generalized level?

So very yin-yang-ish…

Such philosophical constructs generally are relevant only in the imaginary realm of their construction…and, ultimately, generally are shown to be fatally flawed and so very impractical where very tangible real-world problems are confronted.

A reader posting on Amazon notes another philosopher’s observation of Stove’s basic failure in concocting the argument in this book:

“As explained well by Indurkhya (1990), Stove misrepresents the thesis that he claims to refute. Hume’s claim was that no matter how many objects we observe, we cannot conclude anything about unobserved objects. Stove’s formal version of this claim says that no matter how many objects we observe, we cannot conclude anything – period. Of course this straw man version is wrong, since we can certainly conclude something about the things we have already seen. But this is irrelevant to Hume’s claim, which is concerned with the things we have not yet seen. Duh!”

The philosopher’s real contribution is taking such profoundly simple principles & repackaging them into something that seems erudite & profound. That may be a good exercise for some (Alan Greenspan’s ability to expound at length while conveying very little comes to mind), so there is clearly some, if very limited, practical value in the musings of philosophers.

@Luis Dias:

“Endemic in mathematics? Or in people’s opinions about mathematics? Do not confuse the two. Mathematics is merely counting.”

The context, that is, the sentence to which I was replying, should make it clear what I meant.

And no, mathematics is not “merely counting”.

It’s easy to get that impression at times. I’ve always treated those who insist on particular spellings of transliterated words (like “Quran” vs. “Koran”) or even insist on the original alphabet (like using Greek alphabet letters to spell “kinesis”) as affecting erudition. They act as if the words themselves transcend the concept. Window dressing.

Not saying any of this applies to Stove.

“you’re heading toward the ghost in the machine.”

Not if Dr. Briggs continues to follow Feser’s arguments. Feser is an Aristo-Thomist and the “ghost in the machine” foolishness is a consequence of Descartes’ scientificalistic approach.

“If by ‘experience’ you also include the trial and error I spoke about”

Hey! Let’s see if we can use that “empirical observation” thingie. Can we cite any actual observations to confirm that logic arose by trial and error?

“Richard Feynman, winner of the 1965 Nobel Prize in Physics and famous teacher wrote: ‘The principle of science, the definition almost, is the following: the test of knowledge is experiment.’ So your statement could easily be construed as anti-scientific. That science is false.”

In what way does a mastery of physics (i.e., the metrical properties of material bodies) confer expertise in meta-physics (i.e., the a priori beliefs that underlie the scientific program)? You will notice that if Feynman’s Criterion is firmly applied, we could not know that the objective universe exists: any such observation requires a priori a belief that there is an objective universe to observe.

“it is not unknown for an axiom to fall into the abyss of contradictory logic. Russell’s Paradox for example.”

Which axiom fell into the abyss in that case? And what physical observations were involved?

“Most people can observe the outcome of a scientific experiment, but do they know what to make of it?”

A good point. Facts are not self-demonstrating, and only carry meaning when interpreted in the light of a coherent theory. The self-same observation can be interpreted in multiple ways:

“Take two physicists who do not define pressure in the same manner because they do not admit the same theories of mechanics. One for example accepts the ideas of Lagrange; the other adopts the ideas of Laplace and Poisson. Submit to these two physicists a law whose statement brings into play the notion of pressure. They will hear the statement in two different ways. To compare it with reality, they will make different calculations so that *one will find this law verified by facts which, for the other, will contradict it*.”

– Pierre Duhem, “Some Reflections on the Subject of Experimental Physics” (1894), in Duhem, *Essays in the History and Philosophy of Science* (tr. Ariew and Barker), Hackett Publishing (Indianapolis and Cambridge, 1996).

+ + +

“Mathematics is merely counting.”

Thanks. I needed a good laugh today.

+ + +

“philosophical constructs generally are relevant only in the imaginary realm of their construction… and, ultimately, generally are shown to be fatally flawed and so very impractical where very tangible real-world problems are confronted.”

Logic is impractical where very tangible real-world problems are confronted? Who knew? Maybe the “philosophical construct” that holds the material universe to be rationally ordered and accessible at least in large measure to human reason. That is soooo impractical.

Perhaps you are thinking of post-Cartesian philosophy.

@G Rodrigues

First, one divided by an infinitesimal is not exactly infinity, is it? But, to return the compliment, “your knowledge in this matter is surely vast so maybe you can give me an example?” 🙂

Secondly, that some theory of science contains an assumption of “separable infinite-dimensional Hilbert spaces and their algebras of operators” does not mean that all scientific theories require a concept of infinity as a basic feature. Or that science requires infinity. Nor that every future theory that covers the same phenomena will require the assumption of infinity. Science only requires consistency as a matter of logic. It doesn’t even require a theory to be true. (Not to mention that all the infinities in scientific theories I know about are always normalized away. And they are never actually observed.)

Ye Olde Statistician said:

“In what way does a mastery of physics (i.e., the metrical properties of material bodies) confer expertise in meta-physics (i.e., the a priori beliefs that underlie the scientific program)? You will notice that if Feynman’s Criterion is firmly applied, we could not know that the objective universe exists: any such observation requires a priori a belief that there is an objective universe to observe.”

I made an appeal to authority. So if you reject it for whatever reason, that’s fine. I was just crediting the quote’s author.

Yes, Feynman’s Criterion is an assumption. So the task is not to prove that the objective universe exists. We simply assume it. Science is a game. A practical game. A game that has shown it minimizes the consequences of our false beliefs about reality. If you don’t want to play, again, that’s fine. Really. We all await someone inventing an even better game. And that will never happen if no one tries.

@George Crews:

“But, to return the compliment, ‘your knowledge in this matter is surely vast so maybe you can give me an example?’”

An example of what? You said that non-standard analysis does not require infinity. You probably have to explain exactly what you mean by this, but I objected to that statement on two grounds.

1. Since an infinitesimal e is an element that is smaller than 1/n for every (standard) natural n, its inverse 1/e is greater than every (standard) natural n. To me, this counts as an infinitary element of sorts. The appeal to non-standard analysis is even more puzzling when the standard real line does not have such “infinitary” elements, as it is Archimedean.

2. In order to construct the non-standard real line you have to appeal to a fairly strong choice principle (Boolean prime ideal theorem) and thus, to highly infinitary mathematics.

If you take an axiomatic approach and “bypass” the need to construct the non-standard real line in some background theory (ZFC, say), then you have to smuggle in infinitary mathematics by other means; this is an elementary point of logic, although exactly how much one needs I do not know; you will have to ask the reverse mathematics guys.

“Secondly, that some theory of science contains an assumption of ‘separable infinite-dimensional Hilbert spaces and their algebras of operators’ does not mean that all scientific theories require a concept of infinity as a basic feature.”

It is you who has to clarify what you mean by “Science doesn’t require infinity.” I just pointed out that at least one well-established scientific theory requires highly infinitary mathematics. The rest of your talk about how Science this or how Science that is, quite frankly, nonsensical.

“Not to mention that all the infinities in scientific theories I know about are always normalized away. And they are never actually observed.”

Look, “infinite” has several different senses in mathematics. In the sense you are using in the quoted sentence, yes, they are never “observed” because no sense can be made of them, and if some calculation leads to them, then it is a sure sign that something has gone wrong and one has to normalize them away, e.g. renormalization in quantum field theories. But this sense of infinite obviously clashes with what you said about non-standard analysis given 1. above. In summary, you are not making sense.

note: in 2. above I use infinite in the set-theoretical sense. The QM example is infinitary in this sense but it is also infinitary in yet another, different sense.

note 2: apologies for the dry, boring technicalities.

@G. Rodrigues

Sorry, let me try a different tack to get my point across. We can do science with potential infinities rather than actual infinities. Since science is limited to measuring things, anything we assume beyond rational numbers is not subject to possible falsification. (Our units of measure are in the denominator. Our biggest number is limited by our smallest ruler.) So why bother? We can, and must, get by with only approximate measurements of the square root of two. And we can't divide a rational number by zero to get infinity. We could reintroduce Newton's fluxions and still do Newtonian physics. Newtonian mechanics isn't actually true anyway. So who cares? Whatever makes things the simplest. Occam's razor is another assumption of the scientific method. Eliminate unnecessary concepts. Error management, not error elimination. Being approximate is more than just OK. Science can't have an actual infinity.
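(As a sketch of the "approximate measurements of the square root of two" point: the continued-fraction convergents of sqrt(2) supply ever-better rationals, and a measurement only ever needs finitely many of them. The function name below is mine, purely for illustration.)

```python
from fractions import Fraction

def sqrt2_convergents(n):
    # Continued-fraction convergents of sqrt(2) = [1; 2, 2, 2, ...]:
    # 1/1, 3/2, 7/5, 17/12, 41/29, ...
    # Both numerator and denominator obey p_k = 2*p_{k-1} + p_{k-2}.
    p_prev, q_prev = 1, 0   # the convergent "before" the first
    p, q = 1, 1             # first convergent: 1/1
    out = [Fraction(p, q)]
    for _ in range(n - 1):
        p, p_prev = 2 * p + p_prev, p
        q, q_prev = 2 * q + q_prev, q
        out.append(Fraction(p, q))
    return out

for c in sqrt2_convergents(5):
    print(c, float(c))
```

Each convergent is a best rational approximation for its denominator size, so any finite measurement precision is reached after finitely many steps.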

BTW, you are giving me a very easy time with this. IMHO, there is a much better abstract number to argue is absolutely connected to reality, however reality may be. The imaginary number: i = sqrt(-1). To keep theoretical consistency from getting fuzzy, I don't see offhand how the abstraction can be made optional or approximate. Only numbers we can measure can be fuzzy. And it's one number we can't measure. We can get by with only the one imaginary number. If I were trying to abstractly prove God, that's the abstract number I would focus on. It might give me an idea how to proceed.

"i = sqrt(-1). To keep theoretical consistency from getting fuzzy, I don't see offhand how the abstraction can be made optional or approximate."

Frankly, I've always thought of it as a tag for the complex part of a number and not an actual number. For attitude control in spacecraft (and elsewhere), quaternions are sometimes employed since they extend the concept to three dimensions.

More of a convenience than a necessity. Multivariate statistics seems to do OK without it.
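(To illustrate the quaternion remark: a minimal, self-contained sketch of rotating a vector with a unit quaternion, the basic operation behind spacecraft attitude control. The function names are mine, not from any flight-software library.)

```python
import math

def quat_mul(q1, q2):
    # Hamilton product of two quaternions given as (w, x, y, z) tuples.
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return (
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    )

def rotate(v, axis, angle):
    # Rotate vector v about a unit axis by angle (radians), via q v q*.
    half = angle / 2.0
    s = math.sin(half)
    q = (math.cos(half), axis[0]*s, axis[1]*s, axis[2]*s)
    q_conj = (q[0], -q[1], -q[2], -q[3])
    p = (0.0, v[0], v[1], v[2])            # embed v as a pure quaternion
    w, x, y, z = quat_mul(quat_mul(q, p), q_conj)
    return (x, y, z)

# Rotating (1, 0, 0) by 90 degrees about the z-axis gives (0, 1, 0)
print(rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2))
```

Note that i, j, k here satisfy i² = j² = k² = ijk = -1, so the complex unit i is recovered as a special case; whether that makes it "more of a convenience than a necessity" is exactly the question at issue.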

@George Crews:

“We can do science with potential infinities rather than actual infinities.”

Define what you mean exactly by "potential infinity" and then we can try to answer your question. Anyway, I see that you have dropped the non-standard analysis example, and with it the sense of infinite you were using, and reverted to the essentially set-theoretical notion of actual infinity. But as I have already said, in the current state of Science you *cannot* get completely rid of infinitary mathematics, although ZFC is way, way more than we need, and we can trim things down to finite-order arithmetic and probably even get a very low upper bound for the order; something like 3 or 4? Ask the reverse mathematics guys.

“Since science is limited to measuring things, anything we assume beyond rational numbers is not subject to possible falsification.”

I do not know what it means to say that some piece of mathematics is “falsifiable”. And “Science is limited to measuring things”? Let me guess, you have been reading about Science (capitalized and suitably hypostasised) in Wikipedia just like for non-standard analysis?

“We can, and must, get by with only approximate measurements of the square root of two.”

This is true, but not for the reasons you mention. For example, if we could make measurements with no error margin, there would be a way to precisely measure, say, a distance of the square root of two (assuming we lived in a spatially Euclidean universe).
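(The precise measurement alluded to would presumably be geometric: in a Euclidean universe, the diagonal of a unit square has length exactly

```latex
d = \sqrt{1^2 + 1^2} = \sqrt{2},
```

so an error-free ruler laid along that diagonal would read sqrt(2) on the nose, with no rational approximation involved.)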

"And we can't divide a rational number by zero to get infinity."

And? Are you now going back to the former sense of infinite?

“Science canâ€™t have an actual infinity.”

Since in its present state, the formal, mathematical background of some scientific theories requires it, your "can't have" is a tad strong, dontcha think?

"To keep theoretical consistency from getting fuzzy, I don't see offhand how the abstraction can be made optional or approximate."

I can make a guess at what you are trying to say by qualifying an abstraction as “optional”, but you lost me with “approximate”.

“Only numbers we can measure can be fuzzy.”

What does it mean for a number to be fuzzy?

"If I were trying to abstractly prove God, that's the abstract number I would focus on."

Huh? Seriously, you are way over your head, you do not know what you are talking about and throw away with gleeful abandonment hastily cobbled sentences in the off-chance that your interlocutor is in a charitable mood and manages to read into them something other than empty verbiage. Since I have little patience for this type of discussion, with a bow, I take my leave.

"Science is a game… that has shown it minimizes the consequences of our false beliefs about reality."

In what way did science minimize the consequences for the people of Nagasaki by reaching a true belief about nuclear fission?

The only false beliefs that natural science minimizes are beliefs about the metrical qualities of material bodies. It says nothing about justice, truth, beauty, art, etc. except insofar as an artist may employ pigments or a pianist vibrating strings. But to know the physics of vibrating strings does not tell us much of importance about the Moonlight Sonata.

"Whereof we cannot speak, thereof we must be silent."

Infinity is a concept that we can sort of understand by observation. Time seems to never end. Never ending! The history of infinity indicates that we had long accepted the concept; however, mathematicians have tried to define it in a way that can be used in all of mathematics.

It would be great if someone could use an example such as "time seems to never end" to plant the idea/concept of "GOD." Then, perhaps, define what it is. No proof of its physical existence required. You can define GOD as infinity. Fine. I'll take it as you defined it. Infinity! Not omnipotent!

If you want to prove that GOD exists or potentially exists, please at least define what it is for me.

G. Rodrigues said: “Huh? Seriously, you are way over your head, you do not know what you are talking about and throw away with gleeful abandonment hastily cobbled sentences in the off-chance that your interlocutor is in a charitable mood and manages to read into them something other than empty verbiage. Since I have little patience for this type of discussion, with a bow, I take my leave.”

Off chance? Actually, charity is exactly what is expected. Quoting from the Wikipedia:

In philosophy and rhetoric, the principle of charity requires interpreting a speaker’s statements to be rational and, in the case of any argument, considering its best, strongest possible interpretation.[1] In its narrowest sense, the goal of this methodological principle is to avoid attributing irrationality, logical fallacies or falsehoods to the others’ statements, when a coherent, rational interpretation of the statements is available. According to Simon Blackburn[2] “it constrains the interpreter to maximize the truth or rationality in the subject’s sayings.”

So I find your lack of any attempt to actually understand what I’m saying and your striking lack of decorum unhelpful to the discussion. This is not a debate. We’re supposed to be on the same side.

Since this is a statistics blog, let me add that E.T. Jaynes once wrote: “In any field, the Establishment is seldom in pursuit of the truth, because it is composed of those who sincerely believe that they are already in possession of it.”

DAV,

I think there are differences between ability and knowledge. In Chinese, there is a notion of innate ability, and "innate knowledge" is an oxymoron since "knowledge" is understood as something we have learned. So "God Exists" is generally not seen as a piece of "knowledge"; it's a belief. Well, we are speaking English here.

Some readers might appreciate the following.

Terry Tao, a great researcher and expositor of mathematics, is often compared to Einstein by mathematicians. A Genius he is. His view on mathematics:

http://terrytao.wordpress.com/career-advice/does-one-have-to-be-a-genius-to-do-maths/

"My view is that mathematics is primarily a language for modeling the physical world, or various abstractions of the physical world (*). So in one sense it is purely formal, in much the same way that English is a formal combination of letters of the alphabet. On the other hand, as our understanding of mathematics improves, our models fit the physical world better (both in terms of predictive power, and in terms of agreement with physical intuition) and so the mathematical objects we study begin to more closely resemble physical objects, though of course they are never actually physical in nature. It is certainly helpful though, when trying to create new mathematics, to think of mathematical objects as being analogous to physical objects; for instance, a mathematical object may "obstruct" another mathematical operation from taking place, and thinking about obstructions is a very useful way to make progress in mathematics.

(*) The physical world generally refers to tangible objects, but one can also consider abstractions of these objects, abstractions of abstractions, and so forth. For instance, a children's ball is a physical object; it might be red. The property of "redness" is then an intangible abstraction, but still physical. The phenomenon of "colour" is then an abstraction of an abstraction, but again still physical; the concept of a "sense" is a yet further abstraction; and so forth. Somewhat analogously, mathematics tends to start with "primitive" objects such as numbers or points, then moves up to sets, spaces, operations and relations, then functions, then operators, (and then functors and natural transformations, in category theory), etc.

[It's true that in set theory, all of these mathematical objects can be described as sets – much as all parts of speech in English can be described as strings of letters – but this is only one of many equally valid interpretations of these objects, and not one which corresponds perfectly to physical intuition.]"

JH,

Interesting. What does "ability" really mean? Does it mean "skill" (an ability which must be learned) or does it mean something more like an inherent property (like being able to fall down) that doesn't have to be learned?

I think we learn to do much of what we can do and are born mostly with the ability to learn. Certain things do seem innate: the ability to move muscles (though we must learn to coordinate them), possibly some rudimentary grammar skills, and of course the ability to learn itself.

So, technically I guess I agree with the Chinese definition if it means an inherent/fundamental property that doesn't have to be learned — at least as it applies to humans.

It is. It’s an extraordinary list of the different ways / methods of counting. But it is just counting. You couldn’t have maths without the proverbial “2 apples plus 2 apples are 4 apples”. And that’s not exactly “pure rational thinking” to me.

@Luis Dias:

""And no, mathematics is not "merely counting"."

It is."

Sigh.

We can keep nay-saying each other until Hell freezes over; so instead, I will open a book on general topology and quote what is probably the most important theorem of the field:

Theorem (Tychonoff): the product of any family of compact spaces is compact.

Now explain to me what that has to do with counting.

"You couldn't have maths without the proverbial "2 apples plus 2 apples are 4 apples"."

What exactly is the relevance of this observation? Even if I granted you that arithmetic is the ground of all mathematics, both historically and logically speaking, this no more implies that mathematics is "merely counting" than the fact that the vast majority of mathematics can be formalized within ZFC proves that all mathematics is "merely set theory".

"And that's not exactly "pure rational thinking" to me."

You will have to clarify what you mean by this.
