Posted by: Michael | 08/02/2008

A great piece on Science and Religion

I just read a great comment by Leonard Ornstein on Peter Woit’s blog, Not Even Wrong, under the post “Is Big Physics peddling science pornography?”

Please go read.

Don’t bother to go: Woit pulled the post, and Woit being Woit, it’s beneath him to explain.

I owe Peter an apology, as can be seen below.


Responses

  1. Mike:

    Peter was polite about pulling my post, and he told me about your response.

    I thought it was on topic, but he doesn’t.

    So I’ll post it here. I presume you don’t mind:

    Science in the Spectrum of Belief: It’s All About Models and Weighted Skepticism

    Mankind entertains a wide range of beliefs. Public and private educational systems, and science itself, generally fail to teach just where, and how, science falls within this spectrum of belief.

    Science focuses on beliefs about ‘reality’. It tries to generate order from – and to reduce the uncertainties generated by – the welter of mental ‘images’ we develop about both the natural and man-made worlds that we explore with our senses. Science is steered by a generally skeptical attitude. But skepticism doesn’t equal pessimism. Depending upon the degree of uncertainty, and on the problem at hand, a skeptic can be pessimistic, agnostic – or even optimistic. The way science and its skepticism fit into the wider landscape of beliefs will be reviewed briefly. Much of this story should be persuasive.

    Belief and Deductive Reasoning

    The application of conventional languages (and such special ‘languages’ as logic and mathematics) to the description of our mental images is the source of most shared knowledge, including that of science. (Instrumental music and non-representational art are two ‘unconventional languages’ encompassing rather different kinds of shared experience.)

    Conventional and logical languages can help their users to cooperate to achieve some common objective only if the users commit themselves to the discipline of ‘trust’ in, or ‘belief’ in, a set of ‘fundamentals’. The commitment, by convention, is to consistently adhere to a fundamental set of rules and definitions, in order to reach shared goals. When the axiomatic rules are followed fairly consistently, languages make it possible to model and exchange some, but not all, information about our mental ‘images’. Typically, information about only very few details of personal visual images is exchanged or stored in memory for later use. Our sense experiences are very much richer than those we verbalize. And we use many sense experiences automatically and subconsciously. Thus, models end up being heuristic sketches; they seem never to be identical to, nor as detailed as, our ‘transient’ mental images.

    The fundamentals are used to build the models by carefully following the rules. The discipline of explicit (spelled-out) or implicit (not stated, but implied) commitment to rules and definitions (that are themselves, either spelled-out, or implied) is essential, otherwise, the Tower-of-Babel-effect would prevail; no one would understand anyone else; communication would fail.

    Models about the world and humans, derived from our sensations of ‘reality’, are the ‘descriptions’ of conventional language; they’re typically individual sentences – or collections of logically connected sentences – word ‘pictures’ of varying complexity and precision. They’re constructed with the axiomatic fundamentals (defined words or symbols, and grammatical and syntactic rules). Such models are the conceptual tools that we use for communication, analysis and discussion.

    By following the rules of deductive reasoning, a model (in mathematics, a conjecture) can be proven either to be true – to be a logically consistent tautology that follows from adhering to the rules (a theorem); or false – to conflict with the rules; or undecidable – if the model, or its set of axiomatics, is incomplete or poorly-specified (note: undecidable is more inclusive and general than K. Popper’s “falsifiable”).

    The commitment to the axiomatic fundamentals, themselves, is equivalent to taking them as ‘true’, although unprovable – usually in principle, as part of a convention, or because no more primordial base from which they might be derived has been found (otherwise they would be merely ‘theorems’, derived from those more primordial axioms).

    Even with good intentions, in the arguments of ordinary conversation, undecidable conjectures may constitute a considerable portion of verbal exchange. This is due to marginal attention to details of consistency (leading to incompleteness of argument), to the ambiguity of simile and metaphor and to the regular use of always-incomplete, factual definitions derived by inductive reasoning (to be discussed below).

    But in the deductive reasoning of math and formal logic, great pains are taken to evade incomplete definitions and poorly-posed arguments and therefore to avoid undecidable conjectures. Until 1931, it was widely believed that, with great care, incompleteness in deductive reasoning always could be avoided. Then Kurt Gödel showed it’s not possible to guarantee completeness for other than the most simplistic axiomatic systems. Although he shook the foundations of mathematics and formal logic, his Incompleteness Theorems have impact on the truth of only very few of a vast number of proven theorems.

    When the commitment to fundamental axiomatics is implicit, the commitment is either to rules and definitions which we’ve learned so well that their application has become automatic and subconscious (as in most conversation), or it’s derived from other inherited, built-in, neural, language-enabling, ‘intuitive’ and subconscious mechanisms that have evolved ‘because’ they’ve enhanced human survival. Since implicit commitment depends upon an automatic, physiological base, it may be somewhat less reliable than conscious, ‘willful’, explicit commitment.

    But all such commitment, explicit or implicit, has an important feature shared with a religious act of faith: we accept the axiomatic rules without logical proof of their truth. In this quite general sense, all deductive reasoning, and the “proving of absolute logical truths”, is ultimately faith-based.

    Belief and Inductive Reasoning

    In the sixteenth century, David Hume taught that some uncertainty about the truth of ‘facts’ always must persist because of the unavoidable incompleteness of observation:

    Since we can never have observed all of the past, all of the present – let alone the future, there’s no way (to use axiomatically-based deductive reasoning) to justify absolute belief in any facts. With inductive reasoning, we try to establish a complete definition of an object, class or process, by extrapolation or interpolation from observation of only a sample of its parts. But it’s not logically possible to guarantee that the definition will also be true for any unobserved parts. This necessary logical incompleteness of all empirical inductive reasoning therefore must be an inescapable source of residual uncertainty.

    Despite this, we all ‘believe in’ the ‘truth’ of many facts. This belief necessarily depends upon a commitment to a kind of ‘faith’ in the ‘truth’ of some conjectures that, in the strictest, logical sense, are deductively undecidable because of the obvious and necessary incompleteness of the supporting evidence.

    Science and Faith

    Science depends completely on both deductive and inductive reasoning. Therefore the argument that science dispenses with the need for any elements of faith or belief – supposedly in contrast to religion – is overly simplistic. Since faith and belief are defined as metaphysical, this argument is also equivalent to the questionable claim of the extensive literature of the Positivists: that science is, and must remain, free of metaphysics.

    But such arguments are really just straw men; they have nothing to do with the important differences.

    Degrees of Belief Across the Spectrum

    Disciplines can differ considerably with respect to the degree of required commitment. Most theists are enjoined to commit to unchallengeable, absolute faith in at least the ‘fundamentals’ of their religion – as are the slaves to some political and economic ideologies. At another extreme, Cultural Relativists and most Post Modernists claim to be committed to undiluted skepticism about all that others believe is fundamental. They argue that all ‘truths’ are suspect because of the necessary biases of local, cultural points of view, and they seem to reject the possibility of any useful global points of view.

    Weighted Skepticism: The Core of the Scientific Method

    Science is strikingly different. Whereas Gödelian incompleteness directly affects the logical truth of only a few of the deductive models of math, logic and theoretical science, Humean incompleteness affects the ‘truth’ of every inductive fact, including all those of experimental science. This might mistakenly seem to give some special priority to deductive logic. But science needs them both.

    Science requires the commitment to fundamental rules, in order to design robust models of reality (by carefully following the rules) so that when communicated, those models may be universally understood. But science always adds the injunction: a degree of belief in each model can be established only with the weight of supporting, factual evidence; the more and the better the evidence, the greater the belief. This attitude is not unique. It shares its origin with general, ‘common sense’ attitudes about how varying strengths of evidence support or refute ideas. However, it’s the important distinction between science and religion – and even between science and mathematics. And it helps to assure that all proposed or established models (heuristics) remain distinguishable from the ‘reality’ they attempt to simulate.

    External or internal (biological) worlds are accessed through our senses – either directly, or ‘through’ intervening ‘instruments’. All the models (ideas, hypotheses, theories, laws) constructed by theoretical scientists to generate order out of such observations – preferably, but not necessarily, carefully reasoned – start with a conjecture about some ‘stand-in’ for some observable(s), as an operational (algorithmic) recipe, and often end with a proven theorem.

    Nonetheless, all remain agnostically tentative guesses about the nature of reality. To inspire any degree of belief that a model ‘explains’ such reality, science requires separate evidence that the model, to some significant degree, matches some previously unobserved (empirical) aspect of those worlds. That’s to say, to receive anything more than the most tentative consideration, most apparent deductive truths about the ‘world’ must be supported by (usually ‘new’) matching, empirical, inductive ‘truths’. Prior similar evidence is largely discounted as support, if it has contributed to the abduction and construction of the model. Note that such inductive support is never required for establishing belief in the purely deductive proof of a typical theorem in mathematics or formal logic.

    The number of provable mathematical conjectures (consistent models; theorems) is enormous. But the fraction of those that can be matched to worldly observations is infinitesimal. Theoretical scientists sometimes discover, and in any case typically use, this tiny fraction of mathematics as an extremely valuable tool for constructing models. Experimental scientists often use some fraction for designing experimental tests of models, and regularly to measure the impact of experimental observations on consequent levels of uncertainty. But it’s a mistake to consider most of mathematics as a kind of science. Most of the time, mathematics avoids logically-undecidable models, and therefore needs no empirical matching and testing. It ‘gets away with’ “pure reason” to prove absolutely true theorems. Science, as almost universally practiced, can’t! It ‘must’ tie all of its models to messy, logically-undecidable facts – and always ends up with at least some residual uncertainty.

    To replace such qualitative terms as “a few”, “some” and “most”, measures of degree of match – for example, “confidence intervals” or “margins of error” – probability-like weights – are now often used to provide a numerical scale for quantifying degrees of belief in facts. Such weights vary, for example, between 0 and 100% (or 0 and 1). Belief, as ‘used’ in science, is effectively analog – shades of grey. Belief, in the non-scientific disciplines, is more typically binary – black and white.
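    [Purely as an illustration of such a numerical weight, here is a short Python sketch; the eight measurements and the 95% level are invented for this example and are not taken from the comment itself:]

        import math

        # Hypothetical repeated measurements of one quantity
        # (numbers assumed purely for illustration).
        observations = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.1]

        n = len(observations)
        mean = sum(observations) / n
        variance = sum((x - mean) ** 2 for x in observations) / (n - 1)
        std_error = math.sqrt(variance / n)

        # A 95% 'weight' attached to the claim that the interval brackets
        # the underlying value. The 1.96 is the normal quantile that the
        # Central Limit Theorem justifies for large samples; with only
        # eight points a t-quantile would be more careful, but 1.96 keeps
        # the sketch simple.
        z = 1.96
        low, high = mean - z * std_error, mean + z * std_error
        print(f"estimate {mean:.2f}, 95% interval [{low:.2f}, {high:.2f}]")

    [The printed interval, not a bare yes/no, is the “shade of grey” the comment describes.]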

    Empirically well-established models, like the Central Limit Theorem, are used to compute the weight of evidence from the measured variation among repeated sample observations, from their number, and from measures of how ‘representative’ they are. Careful accumulation of evidence, slowly but inevitably, reduces residual uncertainty – with one reservation:

    An earth-shaking, quantitative caveat with respect to our understanding of inductive uncertainty was provided by Werner Heisenberg in 1927. He noted that the axiomatic ‘conceptual entanglement’ of the definitions of certain pairs of what are now called non-commuting, canonically conjugate observables (that are the Fourier transforms of one another; like, in mechanics, the position and momentum of a particle; or, in spectroscopy, energy and time interval) guarantees that the more confidently we establish the measured magnitude of one of the pair, the more uncertain we must become about the other. This Uncertainty Principle has had an enormous impact on our understanding of microscopic and high-energy observations, and lies at the foundation of quantum theories of physics. It got us around some of the singularities, the intractable ‘infinities’, of the attractions and repulsions of the classical field theories as particle separations approach zero. Paradoxically, it’s permitted some of the most precise (most certain) measurements so far made by physical science!
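    [For reference, the standard quantitative form of the two conjugate pairs mentioned above – not spelled out in the comment itself – is, in LaTeX notation,

        \Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}, \qquad \Delta E \,\Delta t \;\gtrsim\; \frac{\hbar}{2},

    where the first relation pairs position with momentum and the second, more heuristic one pairs energy with time interval; the reciprocal trade-off arises because the paired distributions are Fourier transforms of one another, so narrowing one necessarily broadens the other.]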

    Because of the already-discussed, unavoidable, sampling incompleteness in attempts to establish facts, it follows that calculated degrees of belief may only approach, but never equal, 0 or 1 (or 100%). Belief that flows from experimental science (and from observation, generally) can sometimes be very strongly supported by evidence. But no amount of evidence can ever provide absolute certainty about any model’s match to reality. This residual uncertainty distinguishes the ‘factual truths’ of scientific disciplines from the ‘absolute’ truths of mathematics, formal logic, religions and the claims of many other non-scientific intellectual disciplines.
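    [A toy Python sketch of how such a calculated weight behaves as evidence accumulates; the prior of 0.5 and the 4:1 likelihood ratio per observation are assumptions invented purely for illustration:]

        # Toy Bayesian updating: a weight of belief in a model, starting
        # from a prior of 0.5, updated by observations that each favour
        # the model by an assumed 4:1 likelihood ratio.
        belief = 0.5
        likelihood_ratio = 4.0

        for trial in range(1, 21):
            odds = belief / (1.0 - belief)   # probability -> odds
            odds *= likelihood_ratio         # fold in one piece of evidence
            belief = odds / (1.0 + odds)     # odds -> probability
            print(f"after {trial:2d} confirming observations: "
                  f"belief = {belief:.12f}")

        # In exact arithmetic the weight climbs toward 1 but never reaches
        # it: each update multiplies finite odds by a finite factor, so
        # some residual uncertainty always remains.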

    Challenging the pervasive, black-and-white mind-set of typical reasoning, inherent uncertainties of conversational deductive reasoning, and of all inductive reasoning, set limits to certainty in all intellectual disciplines (e.g., religion, law, history and science). Among these disciplines, and despite the uncertainties, only the weighted, skeptical, agnosticism of the scientific approach, using these two reasoning tools, continues to discover, and ever more firmly establish, increasing numbers of empirical ‘truths’. These ‘truths’ include models of the world, its parts and its processes that have been demonstrated to deserve the very highest levels of confidence. Some even have been promoted as probable “Laws of Nature”. Such well-established models are our most valuable tools for reducing worldly uncertainty.

    Yet-to-be-established- and Pseudo-Science

    Even the most carefully constructed models remain indistinguishable from good science fiction (tantalizing and artfully constructed “just so stories”) – until they’re matched and confirmed by experiment or other pertinent empirical observations that support some significant degree of confidence. With this mindset, scientists generally have a low degree of belief in unconfirmed speculative models about ‘reality’. Searches for extraterrestrial intelligence (SETI), stimulated by very weakly-supported hunches and ‘hopes’, and for the Higgs boson, ‘predicted’ by the Standard Model of particle physics – both ‘awaiting’ confirmation – currently fall in this science-fiction-like category. Individual scientists may differ considerably with respect to how much they might be willing to bet that such models will be confirmed. This usually depends upon the level of pertinent experience and training, and how well each believes such a model fits in with already well-confirmed, scientific models. Most physicists support the Large Hadron Collider (LHC), at least in the hope of demonstrating the Higgs; many fewer scientists show interest in SETI.

    In instances of pseudo-science, or of myths about demonstrably false models (like those of homeopathy and astrology), it’s the responsibility of scientists to ‘teach’ the public how they can be distinguished from ‘established’ science. We should expect the tolerance of scientists to be especially low for models that, as a consequence of the way they conflict with those models already most confidently established, appear to concern not only unobservable, but unconfirmable entities – like perpetual motion machines, creationism, and most likely, multiverses and anthropic landscapes.

    Confidence usually builds with the accumulation of increasing amounts of supporting evidence. Therefore, it’s not unreasonable for confidence in an unconfirmed model to diminish (for scientists and nonscientists) the longer it takes to present plausible, confirmatory evidence for a theory; and especially the longer it takes to supply even descriptions of how evidence for, or against, it might be obtained. Some scientists are concerned that public support for science and its promise of further contributions to civilization – and especially for high-cost, high-profile science – could be placed in some jeopardy by repeated hype coupled with undelivered promises. This can provide motivation to inform the public, with some vigor – but preferably without hype, and with the small but finite risk of nonetheless turning out to be wrong – about the uncertainties of such unconfirmed, and perhaps unconfirmable, models (for example, string theories of particle physics).

    What Does Science Have to Offer?

    As time passes, in a stumbling manner, science increasingly explores many accessible details, from among the enormous numbers of sensed experiences previously ‘ignored’. At the same time, it develops new instruments to sense new detail beyond our own acuity. And it develops other instruments to explore whole new ‘territories’ that we’ve never experienced.

    Much of the useful technology of applied sciences (like engineering and medicine) carefully exploits the scientific knowledge base by using it to design and ‘construct’ civilization’s most reliable and tangible fruits and tools (themselves ‘models’), as well as for predicting new risks – and unfortunately, for creating a few unplanned risks.

    In the face of worldly uncertainties, science helps humans to cope with survival with a generally optimistic self-confidence as a result of:

    the widely-valued technological fruits of science,

    the utilitarian promises of, and the intellectual fulfillment arising from, the continuing drive to further reduce uncertainty and to discover new empirical ‘truths’ about the world, and

    the general social tolerance engendered by the mind-set of weighted confidence and weighted skepticism.

    These are science’s gifts to civilization.

    Careful evaluation of uncertainty is important. For many purposes, 99.99% certainty of avoiding failure may appear to be ‘practically’ indistinguishable from absolute reliability. But it depends upon the magnitude of risks. Neither with respect to a bridge, an airplane – nor some political decisions – is even 99.99% an acceptable level of certainty. And it’s only the method of science that shows us how to do better.
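    [One way to see why 99.99% may be nowhere near enough – the traffic figure below is an assumed number, chosen purely for illustration: if each crossing of a bridge is survived with probability 0.9999, then the probability of surviving 10,000 independent crossings is only $0.9999^{10\,000} \approx e^{-1} \approx 0.37$, so a failure somewhere along the way is more likely than not.]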

    If democratic ships of state are to be navigated safely and effectively, it’s essential for populations and their leaders to be educated to a broad understanding of how to use science to manage risks and uncertainty – and how science fits within the spectrum of beliefs.

    But when estimating real-world risks and rewards, unchallengeable religious or ideological beliefs are very poor substitutes for the weighted skepticism of science. This is one of the best reasons for the very strictest separation of church and state.

    The value of reducing uncertainties ties in closely with beliefs about survival value and ethical values. The complex way that survival and ethics fit into the spectrum of belief is still another story, neglected here for ‘brevity’.

  2. “The fundamentals are used to build the models by carefully following the rules. The discipline of explicit (spelled-out) or implicit (not stated, but implied) commitment to rules and definitions (that are themselves, either spelled-out, or implied) is essential, otherwise, the Tower-of-Babel-effect would prevail; no one would understand anyone else; communication would fail.”

    This is a key point, but needs a lot more analysis because of arguments over what the fundamentals really are. In order to make incremental progress, it’s true that you build on existing assumptions. However, radical progress usually involves (by definition) changing fundamental concepts in such a way that the way facts of nature are interpreted changes. For example, general relativity describes accelerations as results of a curvature in spacetime, and treats all accelerations classically as truly differential increases in velocity, not the sum of a lot of individual graviton interactions from a quantized gravitational field. General relativity has been tested in various ways, and confirmed very accurately on certain scales (the small positive cosmological constant needed on the largest scales was however an ad hoc modification, not a prediction, and general relativity has not been tested on quantum scales).

    So should the acceptance of general relativity be a universally agreed-upon axiom for all future progress, or not? Regarding the Tower of Babel, this kind of foundational problem is one of the key issues for modern physics.

    Tony Smith quotes Feynman’s account of the problems he had at the Pocono Conference in 1948, where the leading physicists Teller, Pauli and Bohr all dismissed Feynman’s work. See http://www.valdostamuseum.org/hamsmith/goodnewsbadnews.html

    “… My way of looking at things was completely new, and I could not deduce it from other known mathematical schemes, but I knew what I had done was right.

    … For instance,

    take the exclusion principle … it turns out that you don’t have to pay much attention to that in the intermediate states in the perturbation theory. I had discovered from empirical rules that if you don’t pay attention to it, you get the right answers anyway …. Teller said: “… It is fundamentally wrong that you don’t have to take the exclusion principle into account.” …

    … Dirac asked “Is it unitary?” … Dirac had proved … that in quantum mechanics, since you progress only forward in time, you have to have a unitary operator. But there is no unitary way of dealing with a single electron. Dirac could not think of going forwards and backwards … in time …

    … Bohr … said: “… one could not talk about the trajectory of an electron in the atom, because it was something not observable.” … Bohr thought that I didn’t know the uncertainty principle …

    … it didn’t make me angry, it just made me realize that … [ they ] … didn’t know what I was talking about, and it was hopeless to try to explain it further.

    I gave up, I simply gave up …”.

    – “The Beat of a Different Drum: The Life and Science of Richard Feynman”, by Jagdish Mehra (Oxford 1994) (pp. 245-248).

    Feynman’s idea was explained to Oppenheimer by Dyson; Oppenheimer had no time for new ideas from youngsters and was abusive towards Dyson until Bethe intervened on Dyson’s behalf, as Dyson explains in an interview.

    Tony Smith also mentions on his page http://www.valdostamuseum.org/hamsmith/ecgstcklbrg.html the work of Ernst Stückelberg who came up with Feynman’s key ideas about 5 years earlier, but had them rejected by the Physical Review in 1943.

    Another example is George Zweig, whose quark model called Aces was rejected by Physical Review Letters. He stated:

    ‘Getting the CERN report [on the discovery of quarks] published in the form that I wanted was so difficult that I finally gave up trying. When the physics department of a leading university was considering an appointment for me, their senior theorist, one of the most respected spokesmen for all of theoretical physics, blocked the appointment at a faculty meeting by passionately arguing that the ace [quark] model was the work of a “charlatan.” … Murray Gell-Mann [co-discoverer with Zweig of quarks/aces] once told me that he sent his first quark paper to Physics Letters for publication because he was certain that Physical Review Letters would not publish it.’

    – George Zweig, co-discoverer (with Murray Gell-Mann) of quarks, quoted on page 95 of John Gribbin’s, In Search of Superstrings: Supersymmetry, Membranes and the Theory of Everything, Icon Books, Cambridge, England, 2007.

    It’s unsurprising that after Feynman’s experience at the Pocono Conference of 1948, with ignorant attacks from a consensus of the top physicists who were all certain he was wrong, Feynman later went on to write things like:

    ‘Science is the organized skepticism in the reliability of expert opinion.’ – R. P. Feynman (quoted by Smolin, The trouble with physics, 2006, p. 307)

    and

    ‘Science is the belief in the ignorance of [committees of speculative] experts.’ – R. P. Feynman, The Pleasure of Finding Things Out, 1999, p187.

    One problem relevant to the tower of babel (people using different assumptions) is that it is vital for people to explore different possibilities and different types of mathematics in order to overcome groupthink:

    ’Groupthink is a type of thought exhibited by group members who try to minimize conflict and reach consensus without critically testing, analyzing, and evaluating ideas. During Groupthink, members of the group avoid promoting viewpoints outside the comfort zone of consensus thinking. A variety of motives for this may exist such as a desire to avoid being seen as foolish, or a desire to avoid embarrassing or angering other members of the group. Groupthink may cause groups to make hasty, irrational decisions, where individual doubts are set aside, for fear of upsetting the group’s balance.’ – Wikipedia.

    ‘[Groupthink is a] mode of thinking that people engage in when they are deeply involved in a cohesive in-group, when the members’ strivings for unanimity override their motivation to realistically appraise alternative courses of action.’ – Irving Janis.

    Sharing the same beliefs in the validity of certain mathematical systems for dealing with quantum gravity, and sharing the same interpretative assumptions like dark energy, is a step towards groupthink. Moving in the other direction, of course, the Tower of Babel problem occurs.

    “But when estimating real-world risks and rewards, unchallengeable religious or ideological beliefs are very poor substitutes for the weighted skepticism of science.”

    I agree that skepticism is vital. But it is all too easy to corrupt scientific skepticism. Take mainstream string theory. Peter Woit wrote in 2002:

    “For the last eighteen years particle theory has been dominated by a single approach to the unification of the Standard Model interactions and quantum gravity. This line of thought has hardened into a new orthodoxy that postulates an unknown fundamental supersymmetric theory involving strings and other degrees of freedom with characteristic scale around the Planck length. […] It is a striking fact that there is absolutely no evidence whatsoever for this complex and unattractive conjectural theory. There is not even a serious proposal for what the dynamics of the fundamental ‘M-theory’ is supposed to be or any reason at all to believe that its dynamics would produce a vacuum state with the desired properties. The sole argument generally given to justify this picture of the world is that perturbative string theories have a massless spin two mode and thus could provide an explanation of gravity, if one ever managed to find an underlying theory for which perturbative string theory is the perturbative expansion.” – Quantum Field Theory and Representation Theory: A Sketch (2002), http://arxiv.org/abs/hep-th/0206135.

    ‘Actually, I would not even be prepared to call string theory a ‘theory’ … Imagine that I give you a chair, while explaining that the legs are still missing, and that the seat, back and armrest will perhaps be delivered soon; whatever I did give you, can I still call it a chair?’ – Nobel Laureate Gerard ‘t Hooft [Quoted in PW’s book ‘Not Even Wrong’, 2006]

    In his book ‘Not Even Wrong: The Failure of String Theory And the Search for Unity in Physical Law’, Woit explains that there are many “known unknowns” (to use Donald Rumsfeld’s popular phrase) in modern physics that are real problems which need to be addressed. E.g., a theory to explain the values of the parameters describing mass needed in the Standard Model. None of these problems are actually addressed by string theory, which builds upon, instead, speculative unknowns or “unknown unknowns” like Planck scale unification guesswork.

    So string “theory” is just like Wolfgang Pauli’s “empty box” (which is printed on the right hand side of the page here: http://www.americanscientist.org/template/AssetDetail/assetid/18638/page/2#19239 ).

    The very fact that string “theory” is being hyped, and needs to be countered by Woit, shows that we live in an extremely pseudoscientific age with regard to how mainstream ideas are promoted. Woit points out that there is no problem with scientists pursuing whatever they want, including extra-dimensional theories like “string”, which as yet predict nothing checkable and have not been shown even to reproduce the Standard Model.

    What’s wrong is for people to falsely hype such things with misleading claims. Penrose in his “Road to Reality” (2004) criticised Edward Witten’s hyped claim that:

    ‘String theory has the remarkable property of predicting gravity.’ – Dr Edward Witten, M-theory originator, Physics Today, April 1996.

    Witten in 2006 wrote:

    ‘The critics feel passionately that they are right, and that their viewpoints have been unfairly neglected by the establishment. … They bring into the public arena technical claims that few can properly evaluate. … Responding to this kind of criticism can be very difficult. It is hard to answer unfair charges of élitism without sounding élitist to non-experts. A direct response may just add fuel to controversies.’ – Dr Edward Witten, M-theory originator, Nature, Vol 444, 16 November 2006.

    He suggested that string “theorists” should not respond directly to critics, for fear of adding fuel to controversies by sounding elitist. What they do instead of responding to criticisms is to spew out more hype claiming to “explain” to the ignorant that their non-checkable spin is science. The underlying message is that string “theorists” have no other way available to defend uncheckable abject speculation than to be elitist and patronising, i.e., to say they are right because they are the mainstream elite and any critics are simply ignorant, confused, or haters of science. Most people accept what they are told by a committee of experts like a group of top string “theorists”.

    “The value of reducing uncertainties ties in closely with beliefs about survival value and ethical values. The complex way that survival and ethics fit into the spectrum of belief is still another story, neglected here for ‘brevity’.”

    That’s a pity, because this is key to understanding why a group of alleged scientists are using physics as a substitute for religion.

  3. nige cook:

    I agree with everything that you’re trying to convey – except about what I was trying to convey 😉

    You missed my point about “fundamentals”. It’s about axiomatic systems of conventional languages, math and formal logic – the most primitive set of fundamentals that must be agreed to before communication can even begin. It was a discussion of ‘pure’ deductive reasoning. It tried to exclude “facts of nature”. Facts of nature are covered under inductive reasoning.

    I point out later that the MODELS of science (which definitely are NOT fundamentals) should be “preferably, but not necessarily, carefully reasoned” – supporting the idea that Feynman’s rejection by Teller, Pauli and Bohr was, in principle, ‘unscientific’. In at least Teller’s and Pauli’s cases, that was probably because they were more nearly radical Platonists. And Platonists, in the pejorative sense, are more likely to “use physics as a substitute for religion”.

    Of course, my post above, was hardly brief. But an essay, that I’m near ‘completing’, that includes a treatment of ethics, politics and golden rules in relation to science, runs some 50 pages!

  4. I downloaded Len’s article as soon as I read it on Woit’s site because I knew it wouldn’t be there later. I was happy to find it here; it deserves a full hearing.

  5. Correction:

    David Hume, 1711-1776, lived in the eighteenth – not “sixteenth” century.

  6. Hi! I was surfing and found your blog post… nice! I love your blog. 🙂 Cheers! Sandra. R.

