Objective Morality: An Empirical Physicalist Approach (Pt. 2)

Free will, determinism, and moral responsibility.

As a pragmatic, empirical physicalist, I try to take an evidence-based and reasoned approach to morality. In the first part of this series, I claimed that objective morality is a principle that operates exclusively at the level of species survival.

But how can morality ever be objective? After all, isn’t something that’s seen as moral in one society repugnant to another?

The dictionary defines “objective” as “not influenced by personal feelings or opinions in considering and representing facts.” For our purposes, we can extend that to exclude codified societal and cultural norms as well as personal feelings.

If we want to build a case for an objective morality, we need to avoid personal and societal opinion and stick to facts. But how do we know what counts as a “fact”?

The dictionary defines a “fact” as “a thing that is indisputably the case.” This strikes me as too vague for our purposes, so let’s try to make it clearer. Objective facts are measured by reference to the actual, real universe. So when we say something is “objective” or “factual” or “true,” it is by comparing a claim to an observation of the real universe. That’s the only way to be objective; there is no other.

By “objective morality” then, I mean “behavior that can be shown to be virtuous, good, or correct by looking at the result in the real universe.” In part 1 of this series, I claimed that it’s impossible to say whether or not a particular behavior is objectively virtuous, good, or correct for an individual. There is no objective basis for such a judgment.

Take murder, for example. Is there a universal objective consequence if an individual murders? Clearly not, though there may be social consequences (providing there is a social prohibition against murder and an effective system for identifying, prosecuting, and punishing a murderer). In many early societies without such effective social systems, murder frequently had few, if any, bad consequences.

On the other hand, there are clear, universal, objective species-level consequences for some behaviors. Take, for example, eating all your offspring. For mortal, sexually-reproducing species, the universe (or Nature, if you will) “punishes” a species that eats all its offspring. The punishment is extinction. Within the existing laws of Nature in the real universe, such behavior literally will not permit the species to survive. It cannot be virtuous, good, or correct for the species because it leads to extinction. The resulting extinction is an objective fact of the universe. Eating all your offspring is objectively not virtuous.

This illustrates how an empirical physicalist can search for objective morality when thinking about their own species.

In the previous article I also acknowledged the difficulty in applying this kind of reasoning about morality to individuals. If objective morality only applies to a species as a whole, not to its individual members, how are we to choose what is virtuous behavior? Without an individual objective basis for moral behavior, are we free to do whatever we want?

In the previous article, I quoted a Darian Leigh lecture from “The Reality Thief”:

“It turns out that nature does not care too much about individual members of any species except as they may contribute to the survival of the species as a whole. However, as individuals of an intelligent species, we can choose to synchronize our individual behaviors with behaviors that are important to the species as a whole. We can attend to the needs of our young; we can be responsible shepherds of our environment; we can develop diverse skills and talents; and we can appreciate diversity in others, even if we don’t like their behavior very much.”

Can we choose to do good instead of bad?

Do we have the free will to choose “good” instead of “bad”? Are we responsible for the choices we make?

Let’s think about free will in the context of whether or not the universe is deterministic. If everything were completely determined, if fate were inevitable, there could be no free will. Such an argument can be used in discussions of God. If God “knows” everything that will happen, the universe is deterministic (at least to someone with His level of knowledge), and there is no free will.

Only a non-deterministic universe can host free will.

Luckily, at the deepest sub-atomic level, real matter in the universe is non-deterministic, or probabilistic. At its most basic level, chance rules. The Heisenberg uncertainty principle limits how much we can even hypothetically know about our universe; the best we can ever do sometimes (e.g. for the location of an electron) is to provide a probability distribution.
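To give a feel for how severe this limit is at atomic scales, here is a back-of-the-envelope sketch in Python (the constants are standard textbook values; the one-angstrom confinement is an illustrative choice of mine, not something from the article): confine an electron to roughly the width of an atom, and its velocity becomes uncertain by hundreds of kilometers per second.

```python
# Back-of-the-envelope Heisenberg estimate (illustrative scenario).
hbar = 1.055e-34        # reduced Planck constant, J*s
m_electron = 9.109e-31  # electron mass, kg

sigma_x = 1e-10         # pin the electron down to ~1 angstrom, m

# Uncertainty principle: sigma_x * sigma_p >= hbar / 2
sigma_p_min = hbar / (2 * sigma_x)        # minimum momentum spread, kg*m/s
sigma_v_min = sigma_p_min / m_electron    # corresponding velocity spread, m/s

print(sigma_p_min, sigma_v_min)  # the velocity spread is ~5.8e5 m/s
```

The tighter you pin down the position, the wilder the momentum becomes; a probability distribution really is the best description available.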

So, there is almost certainly some freedom of choice in our decisions.

Free will is obviously constrained in humans in the real universe. At best, we select from a set of realistic or feasible options (at least, the ones that occur to us), and those options are often severely limited. Don’t believe me? Okay, I’d like you to will yourself to travel into the black hole at the center of the Milky Way and then return to tell me what it’s like. If that’s too hard, we could pick someplace closer, like my favorite cafe in Cuenca. Clearly, “free will” doesn’t mean the will or ability to do anything; exercising our free will is constrained by the realistic options that are open to us in the real universe.

But free will is even more constrained by the capabilities and characteristics of our own mind. I challenge you not to see the word “elephant” in this sentence. Okay, maybe that wasn’t fair (to be honest, I also chose a picture of the cutest baby elephant I could find to help set up your expectations). It was impossible to predict what I was going to say without seeing the pattern of the self-referential sentence. But, now that you’ve seen that sentence structure, I challenge you not to see the word “rhinoceros” in this sentence. Hard, isn’t it, controlling your mind with your will?

Think of your favorite ice cream flavor. Maybe it’s chocolate and you don’t like strawberry very much at all. Can you use your free will to consciously change your mind about that? To make strawberry your new favorite flavor and to dislike chocolate? Hard to do, if not impossible.

So free will doesn’t allow you to do anything that isn’t physically feasible and it doesn’t even give you much control over your own mental processes. What good is it? Is it even free?

Neuroscience has much to say about the biological basis of free will. Sam Harris in “The Moral Landscape” says:

“The truth seems inescapable: I, as the subject of my experience, cannot know what I will next think or do until a thought or intention arises; and thoughts and intentions are caused by physical events and mental stirrings of which I am not aware.”

One operational test of whether or not we have any free will (irrespective of its biological basis) would be to see if we could make a computer program that emulates your cognitive processes well enough to predict your every decision. (Of course, if you knew about this program, that knowledge would affect your subsequent decisions, so the test is problematic.)

I would conclude that the factors that go into an individual making a non-trivial, non-repetitive decision are sufficiently complex as to make such a predicting program very difficult, if not impossible, to build. For me, that’s “free will” enough.
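The gap between a predictor and an actual decision can be sketched with a toy model (entirely my own construction, not anything from the article: the names `decide` and `best_predictor` and the numbers are illustrative). If a decision mixes a stable leaning with an irreducibly random component, the best possible predictor beats chance but can never be right every time.

```python
import random

def decide(preference, noise=0.3):
    """A toy decision: a deterministic leaning plus an irreducibly
    random component standing in for quantum/neural noise."""
    return "good" if preference + random.uniform(-noise, noise) > 0 else "bad"

def best_predictor(preference):
    """The best a predicting program can do without access to the
    noise: always guess the more likely outcome."""
    return "good" if preference > 0 else "bad"

random.seed(1)
preference = 0.1  # a mild leaning toward "good"
decisions = [decide(preference) for _ in range(1000)]
hits = sum(d == best_predictor(preference) for d in decisions)
print(hits, "of 1000 predicted")  # better than chance, far from perfect
```

If the noise term is genuinely irreducible, no amount of extra modelling closes the remaining gap; only the hit rate, not any single decision, is predictable.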

One big issue in the discussion of free will is whether or not people can choose good behavior over bad behavior. If we have no free will, how can we be held legally responsible for our actions? I would say that if we could make a perfect predicting program, one that takes your genetic predispositions, your environment, and all the influences on your cognitive data structures and predicts what you would do, then the burden of responsibility would actually shift elsewhere. If society can predict perfectly what you’d do in a situation and does nothing to keep you out of that situation, then, at the very least, it shares responsibility with you for your action.

In that regard, while we can’t perfectly predict individual behavior, we certainly can statistically predict some kinds of behavior in certain populations. If society were honest with itself, it would recognize its share of the responsibility. I mean, really, if a society reduces social mobility, provides crappy education, offers few employment opportunities, floods a sub-population with the need to display wealth, and then is surprised when its members turn to selling drugs to break out of their poverty and isolation, that society is run by fools.
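The contrast between individual and statistical prediction is just the law of large numbers at work. A minimal sketch (the 20% rate is a made-up illustrative figure, not a real statistic): no one can predict any single individual below, yet the population rate is predictable to high precision.

```python
import random

random.seed(0)
P = 0.2  # hypothetical per-person probability of some behavior

# Each individual outcome is a coin flip we cannot call in advance,
# but the aggregate rate across a large population is stable.
population = [random.random() < P for _ in range(100_000)]
observed_rate = sum(population) / len(population)
print(observed_rate)  # very close to the predicted 0.2
```

This is why a society can be confident about the aggregate consequences of the conditions it creates even while remaining unable to say which particular member will be affected.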

Our tendency to ascribe behavior to the free will exercised by an independent human agent, and thereby to assign credit or blame, reminds me of the distinction between proximate and distal causes.

Let’s take the example of a pile of sand. I drop grains of sand onto a tabletop, slowly building up a pile. Eventually, the pile becomes unstable and collapses (the physics of this is fascinating). Now, if a “strict” reductionist claims that everything about how the pile behaves can be known from how individual grains of sand behave, well, that’s just stupid.


The behavior of the pile of sand is a function of the sand grains, the tabletop, gravity, and the structural relationships between these things. My own view is reductionist only in the sense that the components and their relationships together determine the behavior of the aggregate. Such aggregate behavior is *emergent* from the components and their relationships. There is no single component you can examine to understand the structural integrity of the pile of sand.

If you knew the properties of the components (sand grains, tabletops, gravity, neurons, synapses, etc.), you might be able to deduce their aggregate behavior, but many properties aren’t obvious except in the aggregate. The friction of sand grains arises from edges and microfractures in a way that isn’t obvious in a single grain. Still, you can study behavior in small aggregates and extrapolate to larger ones.

So, imagine you’re building this increasingly unstable sand pile and you add one final grain. The whole pile collapses in an avalanche. Was the last grain responsible for “causing” the avalanche? It was, for sure, the proximate cause, the grain that broke the pile’s structural integrity. But wasn’t it almost inevitable that if you built a highly unstable pile, eventually one grain would cause it to fall? Does the responsibility lie with the single grain or with all the grains that contributed to the instability?
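The sand-pile intuition above is the basis of a well-studied model of self-organized criticality (the Bak–Tang–Wiesenfeld sandpile). Here is a minimal one-dimensional sketch of my own (the function name, table size, and toppling rule are my choices, not the standard 2-D formulation): a site holding two grains topples, shedding one grain to each neighbour, and grains at the table’s edge fall off. Identical dropped grains trigger wildly different avalanches depending on the accumulated state of the whole pile, which is exactly the proximate-versus-distal point.

```python
import random

def drop_grain(pile, site, threshold=2):
    """Add one grain at `site`, then topple until stable.
    A site at or above `threshold` sheds one grain to each
    neighbour; grains at the edges fall off the table.
    Returns the avalanche size (number of topplings)."""
    pile[site] += 1
    topplings = 0
    unstable = [site] if pile[site] >= threshold else []
    while unstable:
        i = unstable.pop()
        if pile[i] < threshold:
            continue  # already relaxed by an earlier toppling
        pile[i] -= 2
        topplings += 1
        for j in (i - 1, i + 1):
            if 0 <= j < len(pile):          # grains off the edge vanish
                pile[j] += 1
                if pile[j] >= threshold:
                    unstable.append(j)
        if pile[i] >= threshold:
            unstable.append(i)
    return topplings

random.seed(3)
pile = [0] * 50
sizes = [drop_grain(pile, random.randrange(50)) for _ in range(2000)]
print(max(sizes), min(sizes))  # a few huge avalanches, many non-events
```

Every grain is dropped the same way, yet most cause nothing while a few bring down large stretches of the pile: the “responsibility” for an avalanche lives in the accumulated configuration, not in the final grain.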

Free will, societal influences, and responsibility

Sometimes, we have peculiar views on free will and responsibility. Most democratic countries vigorously protect Freedom of Speech. In the United States, Freedom of Speech is one of the most ardently defended rights. But even it has limits.

A person is not permitted to directly exhort others to criminal behavior. For example, it might be considered illegal for me to yell, “Rape her,” but it seems perfectly acceptable in the US to say something like, “A married woman must submit to the sexual advances of her husband.” You can even preach that from the pulpit, though marital rape is a crime in all 50 states.

Though both statements seem to counsel rape, the generalized version isn’t against the law. Should a male church member force sex upon his spouse after hearing such a sermon, who should be held responsible? The husband? The minister? The laws and lawmakers that permitted the exhortation in the first place?

According to Wikipedia: The First Amendment to the United States Constitution guarantees free speech, and the degree to which incitement is protected speech is determined by the imminent lawless action test introduced by the 1969 Supreme Court decision in the case Brandenburg v. Ohio. The court ruled that incitement of events in the indefinite future was protected, but encouragement of “imminent” illegal acts was not protected.

Are we simply too dense, too stupid, to link one act (the preaching) to another (the within-marriage rape)? In many instances, I’d say, “Yes, we are.”

If we want to say that “people have a kind of limited free will to make certain choices in a non-deterministic world under a variety of influences,” should we not make some effort to understand those influences and assign some level of responsibility to them?

How does empirical physicalism view free will?

The empirical physicalist is a social pragmatist. I am not primarily concerned with assigning responsibility. I am more concerned with finding ways to encourage virtuous, good, or correct behavior—actions that are in the long-term survival interest of the species—in the majority of its members.

So I would recognize the unpredictability of the universe, the functioning of the human mind (according to the best science available), and the many societal influences on that mind. Then I would ask, “Does this combination lead to the kind of society that is in the best long-term interests of species survival?” If not, I would look to change something. The universe and the human mind are both difficult to change. The obvious, easiest place to begin is with the societal influences that we permit and encourage.

I would begin with societal systems. If you want the limited free will of humans to select some behaviors over others, you want to make those behaviors most attractive, most rewarding, and least difficult. By altering societal systems, the influences at play in human decision-making, you are best able to alter human behavior.

What do you think? Does this smack of “social engineering” to you? Would you object to being manipulated in a consistent, well-developed manner (especially if there was evidence that it was in your own interest)? Do you object to the thousands, perhaps millions, of big and little ways various special interests already manipulate you? Share your views and insights in the comments.

And thanks for following – Paul.


About Paul Anlee

Canadian author Paul Anlee writes provocative, epic sci-fi in the style of Asimov, Heinlein, Asher, and Reynolds, stories that challenge our assumptions and stretch our imagination. Literary, fact-based, and fast-paced, the Deplosion series explores themes in philosophy, politics, religion, economics, AI, VR, nanotech, synbio, quantum reality, and beyond. "When I was very young, a teacher asked our class to write about what we wanted to be when we grew up. My story was titled 'Me the Everything.' I've been fortunate to come close to fulfilling that dream in my life. Computer programming, molecular biology, nanotechnology, systems biology, synthetic biology, mutual fund sales, and photocopy repair; I've done them all. I've spent way too much of my life in school, eventually earning degrees in computing science (BSc) and in molecular biology and genetics (PhD). 'After decades of reading almost nothing but high-tech science fiction, I decided to take a shot at writing some. I aim for stories that are true to the best available science, while pushing my imagination far beyond the edge of what we know today. I love biology, particle physics, cosmology, artificial intelligence, cognitive psychology, politics, and economics. My philosophy is empirical physicalism and I blog regularly about the science and the ideas found in my novels. I believe fiction should educate and stimulate, as much as it entertains. "I currently live in Cuenca, Ecuador where I study Spanish and Chen-style Tai Chi , when I'm not working on exciting and provocative new stories. Visit my web site and blogs at www.paulanlee.com."

24 Responses to Objective Morality: An Empirical Physicalist Approach (Pt. 2)

  1. jakefelasco says:

    Hi Paul,

    You write…

    “Objective facts are measured by reference to the actual real universe. So when we say something is “objective” or “factual” or “true,” it is by comparing a claim to an observation of the real universe. That’s the only way to be objective; there is no other.”

    Would it perhaps be more accurate to say that objective facts are measured by reference to OUR UNDERSTANDING of the universe?

    To investigate how much we should trust our understanding, we might ask this question: when will science end? When will we reach the point of having learned everything about reality, or at least everything that can be learned? Most people answer that this won’t happen for a very long time, or maybe never.

    Now let’s add another factor, the accelerating nature of the knowledge explosion. We can reasonably guess we’ll learn more per year with each passing century.

    If we track an ever accelerating knowledge explosion over a very long time we come to a humbling insight. We currently know close to nothing in comparison to what will someday be known.

    Which means our current understanding of the universe should be treated with high suspicion, and thus any objective “facts” built upon that understanding should not be taken too seriously. A sense of humor about all of this may be the most rational response.

    • Paul Anlee says:

      Hi Jake,
      Thanks for your comment. First, I’d amend your question to: “Would it perhaps be more accurate to say that objective facts are measured by reference to OUR PERCEPTION of the universe?” Though it’s true that our perception is filtered through our understanding (i.e. our scientific models) of the universe, and that our perception should include (in this case) the extensions afforded by our instruments, I emphasize that objective facts are first and foremost about perception (evidence) and then about both accuracy and replication. That’s why science insists on certain standards for what makes acceptable evidence.

      The question of when does science end is a good one. In my sci-fi novels, even when we gain an understanding of what’s behind the laws of nature, it still doesn’t mean we’ve understood everything that those laws imply in all possible universes. Basically, scientific inquiry (even then) has a long way to go.

      I take exception to the idea of “the accelerating nature of the knowledge explosion.” This is a false perception based on the accelerating nature of the technology explosion and the number of SPUs (smallest publishable units) generated each year. Don’t confuse either of these with actual knowledge.

      Though our scientific knowledge is expanding among experts and in society in general, our widespread ignorance is expanding even faster (try arguing against popular misconceptions of evolution, chemtrails, vaccines, genetic engineering, etc. or God/spirituality on FB and you’ll discover this quite quickly).

      I think we’re running up against the amount of “knowledge” that can reasonably be held in a single human mind. If I go back over any of my old science texts, I can recognize facts that I’ve long since forgotten (likely multiple times) as well as facts whose origin (and supporting evidence) is all but forgotten. So, no. I don’t think we’re on an unending upward spiral of knowledge, at least not without significant brain improvements or AI supplementation (shades of transhumanism!). We’re just as likely a decade or less from a new dark age in the west.

      As for our current knowledge being a mere fraction of all we will know in some future, and the implication that “our current understanding of the universe should be treated with high suspicion,” I don’t agree at all. Sure, we’ll learn more, but just as Relativity modified Newtonian physics, our new knowledge won’t invalidate things we already know to be true. Our ideas of evolution, for example, are quite complete and will be refined rather than tossed out. The Standard Model of elementary particles is holding up quite well, and new particles predicted by string theory at LHC (Large Hadron Collider) energy levels have not been found. So, a lot of things we know now are likely to be quite robust into the future.

      As in the past, where we are largely ignorant (where our understanding is obviously incomplete, as in abiogenesis or dark matter/energy), there will be much to learn. We won’t overturn Maxwell or Darwin, but there will be a new understanding that refines current knowledge as we collect new data.

      Unfortunately, many lay people like to use what we don’t know to sow doubt about what we do know. This is reprehensible. Not knowing how life originated doesn’t change much, if anything, about what we do know about how evolution works. Knowing hardly anything about, for example, how cognition and consciousness work means there’s a lot of room for new knowledge to be developed. Some areas will change our worldview significantly, but they won’t mean we toss out things we actually do understand fairly well.

      None of this is ever a back door to allow in magic or woo, though that’s how a lot of people like to use it. They will claim bullshit like, “A decade ago you thought fat was bad and carbs were good. Now we don’t think that,” means that “science can be wrong, so I shouldn’t trust anything it says.” This confuses weak evidence (and, sadly, overextended conclusions) with strong evidence (chemistry still works). Don’t throw the baby out with the bath water. There is no evidence that science as a whole is problematic.

  2. jakefelasco says:

    Hi Paul,

    Thanks for your detailed reply. A few (too many) thoughts…

    We don’t know anything to be true. There are no objective facts, only theories, which are sometimes clung to stubbornly so as to inflate the authority of the experts. It seems the history of science alone should be sufficient to make this clear.

    Let’s go back a few centuries and I will propose here on the blog that our bodies are filled with thousands of species of tiny creatures too small for the eye to see. Readers of this blog would label such an enormous claim as bunk, fantasy, magical thinking, etc. And then, whoops, it turns out to be true.

    Obviously the Earth is flat and at the center of the universe because even a child can see that for themselves, and this empirical observation is confirmed by all other humans. Except that, oops, this is completely wrong.

    What happens over and over again is that some theory becomes the group consensus, and the experts bet their careers on the theory while dismissing challenges as ridiculous nonsense, and then the group consensus is overturned.

    We have no way of knowing which of today’s dogmas will fall victim to this longstanding pattern, or when it might happen, or what a current dogma might be replaced with. Thus, nothing can reasonably be labeled a fact.

    I’m using the word “dogma” deliberately because that’s what an insistence that science delivers “facts” really is. By using the word “dogma” I’m pointing to the religious-like relationship some people have with science, a new “one true way” which has for some replaced the earlier “one true way” presented by various religions.

    The dogma crushing activity I’m engaged in here is not reprehensible, but rather a pretty cool process called reason.

    Over to you sir!

    • Paul Anlee says:

      Hi again, Jake,

      When you said: “We don’t know anything to be true. There are no objective facts, only theories, which are sometimes clung to stubbornly so as to inflate the authority of the experts. It seems the history of science alone should be sufficient to make this clear.”

      I think you might be confusing what scientists think of as “facts,” “laws,” “theories,” and “truth.” A fact is something like “I jump off a tall building and fall (with increasing acceleration) to my death.” The law of gravity would describe this by the equation F = Gm1m2/r^2, that is, the mathematical description of the force felt between two bodies (resulting in my acceleration while falling, according to a = F/m). To the best of our ability to measure, this law of gravity seems to hold universally. The theory of gravity (post-Einstein) explains why this law holds by reference to spacetime curvature. Our ignorance in this area is with respect to joining quantum theory to general relativity, finding a graviton particle, and explaining spacetime as an emergent property of more fundamental phenomena.
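      To make the fact/law distinction concrete, here is a quick numeric check (a sketch; the 70 kg mass is an arbitrary illustrative choice, and the constants are standard textbook values): plugging the Earth and a falling person into F = Gm1m2/r^2 and then a = F/m recovers the familiar acceleration of gravity.

```python
# Newton's law of gravitation: F = G * m1 * m2 / r^2, then a = F / m.
G = 6.674e-11        # gravitational constant, N*m^2/kg^2
m_earth = 5.972e24   # mass of the Earth, kg
m_body = 70.0        # an illustrative 70 kg person
r = 6.371e6          # mean radius of the Earth, m

F = G * m_earth * m_body / r**2   # force on the person, N
a = F / m_body                    # acceleration, m/s^2
print(round(a, 2))                # the familiar g, about 9.8 m/s^2
```

      The observed fall is the fact; the equation is the law that compactly describes every such fact; curvature of spacetime is the theory that explains why the law holds.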

      Truth, as you’ve said, is scientifically unattainable. If we are truly in a holographic universe or a simulation, we may be fundamentally unable to scientifically (combination of evidence and reason) verify that. On the other hand, we could speculate about ways to “exit” such a universe and see if any of them pan out. I don’t like to put limits on imagination (or the ability of reason plus evidence to support what we imagine), so I’d leave these as open questions.

      I could easily see new theories/explanations of why gravity works the way it does, but it seems highly unlikely that the law of gravity would change with a new interpretation of the cause of gravity. This could still leave open the possibility that discovering gravitonic poles (+/- gravity) could allow us to fine-tune both the sign and magnitude of G in the above equation. That doesn’t make current science wrong about gravity; it just makes our present understanding/interpretation limited. But it also indicates what would need to change about our understanding (what we’d need to discover) in order to alter our interpretation, i.e. we’d need to find gravitons and see that they can have two different signs.

      Similarly, I’ve blogged about how unlikely the human “soul” is (here’s the link on this site). This isn’t to prove that souls can’t exist under any circumstance. Instead, it highlights the parameters that would have to be met or explained. For example, any “theory of soul” would have to explain how some new force (not one of the familiar four of EM, weak, strong, or gravity) that we’ve never detected can both read and cause physiological changes in the brain resulting in experiential changes. We already have decades of solid evidence that the physical brain state determines experience, memory, emotion, etc. (e.g. Wilder Penfield’s stimulation work, corpus callosotomies, fMRI, etc.). A complete “theory of soul” would also explain how the soul thinks, feels, experiences, chooses, etc. Anything less than that is going to be viewed as magical explanations.

      I don’t think it’s useful to compare what might have passed as “scientific” observations or explanations centuries ago with modern science. The scientific method is young and constantly developing. Modern standards for acceptable evidence and theory are much more stringent and complete than extrapolating from a simple, personal observation that the Earth appears flat (although there are still millions of people who seem to put their own local, personal observations above the scientific consensus).

      Your example of bacterial inhabitants of the body is also not a good one. At any stage of scientific understanding, it represents a “hypothesis.” Hypotheses of all kinds still float about in science, particularly in areas where our understanding is hugely incomplete (e.g. string theory). Hypotheses should never be immediately accepted as scientific fact, or even theory. They’re just ideas until they have some evidence to support them (when they become “facts”) or some plausible, consistent explanation of the phenomenon (when they become contextualized as “theory”).

      When you use such examples to demonstrate how “some theory becomes the group consensus, and the experts bet their careers on the theory while dismissing challenges as ridiculous nonsense,” you are confusing hypotheses and theories. Scientists often reject, out of hand, hypotheses that are prima facie at odds with existing data (e.g. why I reject the notion of a human “soul”). When new data are discovered, they can overturn decades-old scientific theory in a heartbeat (e.g. when the anomalous precession of Mercury’s orbit proved Newtonian mechanics incomplete, or when the Michelson–Morley experiment demonstrated the lack of an ether). In a modern context, there are hypotheses, like anthropogenic global climate change, the cause of aging (oxidation vs. DNA programming), vaccines and autism, or the basis of consciousness, where all the data may not yet be in (or may not be compelling to non-experts). In these kinds of areas there may be scientific controversy, but this reflects more what various scientists anticipate or believe but cannot yet prove. At every science talk or conference I’ve ever attended, such claims are universally couched in qualifications, but the media, social media, and some individuals like to push claims farther than their data can support.

      That only means scientists are also human; it does nothing to weaken the utility or predictive power of a solid scientific consensus. A lot of the press (and some scientists) get pretty excited about things (that’s just the kind of drama people like) and oversell what we can actually say. You really can’t rely on popular reports (so common in medicine and public health); you have to look at the original papers. I once saw an article that claimed “GMOs and glyphosate cause cancer.” On reading the original primary literature, I realized the headline had dropped some important qualifications (e.g. in free-fed rats, only males had a slight increase, and glyphosate was protective in females) because the group wanted to show GMOs and glyphosate were bad. There were nuanced qualifications in the article (e.g. a tiny sample size of rats prone to cancer anyway) that got omitted in popular reports. Everybody boiled a nuanced scientific article, with some important problems in it, down to the headline: GMOs and Roundup cause cancer. This was just bad science.

      The “dogma” of science is not reason. It is evidence. Reason is just a tool we use to extrapolate from existing evidence to explanation (theory) or prediction (hypothesis). The basis of science is evidence. The evidence from careful observation is always what determines what we think to be true. I don’t know, nor can I imagine, any other way anyone would have it. Would you rather rely on claims made in ancient books? Would you rather rely on what someone believes even when it is readily contradicted by evidence? Would you rather rely on who screams loudest or is most popular? I think evidence from our universe is a pretty good acid test.

      So, I’d like some good modern examples if you could, of the kind of “dogma” (what some call “scientism”) that you are referring to. The only “dogma” I can think of is this insistence on evidence (observation of the natural universe) in order to achieve scientific credibility. That has an assumption that the universe is real, perhaps even objectively real (there are recent experiments to suggest it may not be, and we’ll have to deal with those). [Note that one could expand “universe” to mean “multiverse” and modify “objectively real” in a variety of ways without affecting the inherent basis of this assumption.] If one is to claim that the universe is either not real, or not objectively so, I think the burden of proof would rest on the claimant. From the collective experience of all humanity, the universe certainly seems objectively real. It doesn’t care what you believe, nor how strongly or passionately, when you jump from a tall building. History is replete with the splattered bodies of individuals (often on drugs or simply delusional) who thought they could defy the law of gravity.

      Dogma is defined as: a principle or set of principles laid down by an authority as incontrovertibly true.

      There is nothing in science that matches this definition. Authority is never the basis for scientific fact, nor even theory. Certainly not in the long run.

  3. jakefelasco says:

    Hi Paul, thanks for your ongoing comments.

    You offer a good example of what I mean by “there are no facts” when you said this…

    “A fact is something like ‘I jump off a tall building and fall (with increasing acceleration) to my death.’”

    The idea that we die when we hit the pavement is a theory, not a fact.

    You ask, “So, I’d like some good modern examples if you could, of the kind of “dogma” (what some call “scientism”) that you are referring to.”

    That’s easy. The “more is better” relationship with knowledge which is the foundation of science and modern civilization. This relationship is easily challenged, but the challenge is very rarely welcomed by scientists, or to be fair, anyone else. I’ve been making that case for years and am routinely silenced, ignored, banned etc. wherever I make that case. The “more is better” relationship with knowledge is a key dogma of our time, just like a “more is better” relationship with Jesus was a central dogma of 12th century Europe.

    Here are a few examples to illustrate how weak our power to reason really is.

    1) We’re entering a presidential campaign which is being covered almost around the clock by a thousand media channels. So far at least, I’ve not heard a single journalist ask any candidate whether they are prepared to incinerate millions of people with just a few minutes warning.

    2) The God debate is built upon the assumption, shared by all participants without questioning, that the only possible answers to the God question are “exists” or “doesn’t exist”. And yet the vast majority of reality, space, does not fit tidily into either category. And nobody cares, and just keeps on recycling the same old arguments over and over and over again.

    Science is built upon reason, and reason is a weak medium indeed. Evidence: We have thousands of hair trigger hydrogen bombs aimed down our own throats, a reality we rarely find interesting enough to discuss. Almost the entire culture is lost in this irrationality, including intellectual elites of all flavors.

    We should be quite wary of what we think we know, just as we are wary of conclusions reached by children.

    • Paul Anlee says:

      Hi Jake,

      I see I should have been more explicit. I meant to say the following statement is a fact after the actual event: “A fact is something like ‘I jump off a tall building and fall (with increasing acceleration) to my death.’”

      In response, you said: “The idea that we die when we hit the pavement is a theory, not a fact.” Actually, I meant my statement to reflect a fact: something stated after it actually happens. You fell, you died (in the biological sense). Before the fact of the jump, it would be a hypothesis (NOT a theory in and of itself), i.e. “If you jump, you will die.” I guess you could call it a hypothesis if you were to phrase it something like, “If you jump off a tall building, you will die biologically when you hit the ground but your eternal soul will live on.” In order to be an actual (pseudo-)scientific theory, it would have to include statements about the existence of a soul and its nature. I would only call it “scientific” once it is based on something observable.

      Your example of “scientific dogma” (“The “more is better” relationship with knowledge which is the foundation of science and modern civilization.”) is more philosophical than scientific (although one could place it within a science of sentient beings and the development of their civilization, if such a science existed). I was expecting something along the lines of “evolution is dogma because it is not evidence based.” But, even your example isn’t dogma (defn: set of principles laid down by an authority as incontrovertibly true), as no “authority” has laid down this principle. Instead, the general principle might be said to derive from experience.

      Past experience simplistically suggests that a deeper understanding of how the world works (e.g. leading to electronics, improved crop yields, improved ability to generate energy and efficiently utilize it, vaccines, antibiotics) has led to a demonstrably “better” world, at least according to some standards (e.g. life/health spans, general comfort, transportation, trade, communication, entertainment, computation, etc.). So the “more is better” belief is somewhat rooted in a few hundred years of science. Even prior to that, more knowledge of weaponry and ballistics was at least demonstrably good for certain societies struggling to survive barbarian invasions. So I’d say “more knowledge is better” comes from experience rather than authority.

      That said, you gave a good example of the double-edged nature of, e.g., nuclear knowledge (particularly scientific and technological knowledge), and there are many other such examples (synbio, nanotech, AI) with similar potential problems. Does this make “more knowledge is better” dogmatic? Not really. We still have lots of examples where a little knowledge caused harm (e.g. air and water pollution from many early industrial processes) which was relieved NOT by political or social will, but by a subsequent improvement in science and technology. I would hypothesize that many (if not most) problems caused by science and tech will have solutions based in science and tech, rather than social consciousness or politics (though either of these areas may motivate the search for a scientific solution).

      The alternative to “more knowledge is better” is “some knowledge is too dangerous to know.” You might use the nuclear example (or synbio or nanotech or AI in upcoming years), but your thesis would require you to demonstrate that 1) the knowledge caused a problem that more knowledge couldn’t fix, and 2) the knowledge had no value to society to offset any potential harm. I think both would be difficult to demonstrate.

      Scientists (as opposed to politicians, who generally have almost no scientific background) are generally aware of the double-edged nature of more knowledge. I’ve sat on forums looking at what we can do to lessen the potential damage from AI, nanotech, and synbio. Some people have put forward a strong precautionary principle (don’t develop it until you’re sure it’s safe) as something we should adopt in this context. It is one of the stupidest ideas I’ve ever heard. Proving “safety” (i.e. the lack of any harm) is proving a negative (impossible), and even determining standards is fraught with difficulty. People hate (HATE!) when some govt agency says “Vaccines can cause severe harm in 1 out of a million people.” Yet people readily accept that they have a 1 in 572 chance of dying in a car crash. People go ape-shit over the former (“They’re trying to kill us!”), but joyously accept the latter.

      You keep saying “Science is built upon reason,” but it isn’t. This is just wrong. Science is built upon evidence. We only use reason in two ways: 1) to formulate hypotheses from existing evidence (or sometimes evidence plus imagination), and 2) to formulate theories (collections of evidence and mechanistic explanations) that must be logically cohesive (non-contradictory). That’s all. Evidence is FAR more important than reason in science. Reason based on an incomplete or plain wrong set of data will almost always lead to incorrect deductions. Evidence is king in science.

      I would challenge your argument about the “God debate.” Certainly, in addition to “exists” or “doesn’t exist” (a logically mutually exclusive pair, it would seem), there’s also “ill-defined” and “unknowable”. You’ll find all of these arguments used in debate. That’s why we have theists, atheists, agnostics, and ignostics. I claim that any sufficiently well-defined, non-trivial (i.e. includes statements about His capabilities/powers and intentionality) version of “God” can be demonstrated to not exist (i.e. to have no material influence in this universe). I’ll stand by that, though I know even the word “exist” is difficult to define.

      And, although there are questions about the nature of space(time), I don’t think it can be said there’s a big problem fitting it into the concept of “existence.” At the very least, two fermions with the same quantum properties are prohibited by Pauli exclusion from occupying the same location in space, so that’s one possible definition of “space” that shows it “exists” in the sense that it determines the relationship between particles. One can’t demonstrate any such physical effect of the “God” concept.

      Even though I agree that the hair-trigger nuclear bombs aimed all over the planet are one example of our inability to reason about the organization of societies (e.g. protection of the nation state), and there are many other possible examples, this says nothing about science or knowledge in general. Indeed, if one were to draw any conclusion, I’d say that our scientific, technological, and societal developments are far beyond the understanding of any single person. If anything, we need to develop our ability to understand our world/universe rather than wallow (proudly?) in our perpetual ignorance. I’d claim that the increasing adoption of pseudoscience, superstition (e.g. astrology), and religious belief does far more harm to society than a push for more knowledge. That said, the first super General AI to be developed could easily make biological humans irrelevant on the galactic stage. We’ll stand no chance compared to the level of knowledge and understanding that an SGAI would be capable of. But I don’t think SGAIs will threaten us (what do we have that they need?); they’ll just move on.

      Finally, I’ll agree with your statement that “we should be quite wary of what we think we know.” Skepticism is always a good idea, especially if a power-based authority is trying to convince you of something. But expertise-based authority is a shortcut around having to personally recapitulate the entirety of sci-tech knowledge on your own. Many people have a hard time distinguishing between the two types of authority. Use experts, eschew power.

      Hopefully that clarifies some things. If you, like many others, want to say there should be a limit to what we know and understand, I’d challenge you to say where that limit should be (e.g. now, somewhere in the near future, or somewhere in the past) and show how that might influence global society. But, please, please, be careful not to conflate sci-tech thought and political talk. Politics is still an art not a science (much as I wish we could reason our way forward). The fact that idiots have their finger on the nuclear launch buttons is not because we know how to make bombs, but because we still haven’t figured out how not to make war.

  4. jakefelasco says:

    Hi Paul,

    You write…

    “Your example of “scientific dogma” (“The “more is better” relationship with knowledge which is the foundation of science and modern civilization.”) is more philosophical than scientific ”

    Yes, agreed, it’s a dogma of science culture (our relationship with science) and not of the scientific method specifically. I agree it’s important to make this distinction, and I need to be more careful about that.

    You write…

    “But, even your example isn’t dogma (defn: set of principles laid down by an authority as incontrovertibly true), as no “authority” has laid down this principle.”

    Here I disagree a bit, and would propose the authority is the group consensus. The group consensus is the authority most of us refer to most of the time: we look around at what everybody else is thinking and doing and typically assume that must be valid, because we feel everybody couldn’t be wrong. The “more is better” relationship with knowledge is one such example.

    More specifically, scientists have a great deal of cultural authority, and I’ve yet to meet the scientist who is willing to challenge the “more is better” dogma in any fundamental way. Challenge a bit around the edges here and there perhaps, but not challenge the principle itself. The public looks to scientists for leadership in much the same way we used to look to the clergy, so if scientists are wrong about something, that’s a problem.

    You write…

    “Scientists (as opposed to politicians, who generally have almost no scientific background) are generally aware of the double-edged nature of more knowledge.”

    Intellectual elites ALWAYS propose that they already know whatever the public brings to them. I’m convinced beyond doubt that this is simply not true. Remember, please: scientists have a built-in bias in favor of the “more is better” relationship with knowledge because their funding and cultural authority depend on it. They’re just being human, it’s not an evil conspiracy, but still, scientists are probably the last people we should consult on this issue.

    You write…

    “You keep saying “Science is built upon reason” but it isn’t. This is just wrong. Science is built upon evidence. ”

    And evidence is a principle of reason. Sorry, not wrong.

    You write….

    “I would challenge your argument about the “God debate.”

    And you would lose. 🙂 Unknowable is just another form of “exists”.

    As a scientist, I would hope you would recognize that the concepts of “exist” vs. “non-exist” are highly simplistic ideas which don’t represent reality very well, and yet the God debate has been built upon them for centuries. The point here being, our ability to reason is quite limited, thus we should be at least somewhat skeptical of what arises from its calculations.

    You write….

    “Even though I agree that the hair-trigger nuclear bombs aimed all over the planet is one example of our inability to reason about the organization of societies (e.g. protection of the nation state)-and there are many other possible examples-this says nothing about science or knowledge in general.”

    I must enthusiastically disagree here. Nuclear weapons, and especially our boredom with them, convincingly demonstrate that we simply aren’t mature enough to successfully manage ever more knowledge delivered at an ever accelerating rate. But we don’t like that story line because it’s not sufficiently flattering, so we ignore it. At our great peril.

    You write…

    “I’d claim that the increasing adoption of pseudoscience, superstition (e.g. astrology), and religious belief does far more harm to society than a push for more knowledge.”

    Please explain how pseudoscience, superstition (e.g. astrology), and religious belief will ever be able to destroy modern civilization in less than an hour. We agree that science is not evil, but only science can deliver such a threat. The very thing you love about science, its ability to efficiently deliver credible information, is the very thing that makes it so dangerous.

    You write….

    “The fact that idiots have their finger on the nuclear launch buttons is not because we know how to make bombs, but because we still haven’t figured out how not to make war.”

    Idiots have their finger on the launch button because some very smart scientists gave them that launch button. They did so with good intentions, but the fact remains, without the cooperation of scientists it never could have happened. You know, Roosevelt had no idea how to split the atom.

    Good dialogue! Somewhat limited writing environment, but we’re making do.

    • Paul Anlee says:

      Hi Jake,

      I think we’ve covered a lot of ground and it’s getting obvious there are some intrinsic differences. I’m going to limit the scope of this comment as I’d like to try to narrow down the discussion. I think the core statement in your last comment might be this one: “More specifically, scientists have a great deal of cultural authority, and I’ve yet to meet the scientist who is willing to challenge the “more is better” dogma in any fundamental way.”

      There are parts I’ll agree with and some I’ll disagree with in this statement. I’m curious if you’ve ever worked in science and know actual scientists. When you say “scientists have a great deal of cultural authority” do you mean 1) scientists use the consensus to guide what they believe to be true, or 2) scientists (and what they say) get a lot of (possibly unearned) respect from society?

      If 1), I couldn’t disagree more. Scientists fight among themselves all the time. Nothing is more rewarding to a scientist than discovering that the group consensus is incorrect (i.e. what Einstein did) and overturning the paradigm of the day. If 2), there are studies showing that being a scientist is one of the occupations most respected by the general population. At the same time, there are other studies that show how little scientific knowledge the general public has and how dangerous widespread scientific ignorance can be. If you talk to scientists, you’ll hear many stories about how little respect there is in society for kids who express an early interest in science, and personal experiences that tell them how little society actually respects them and what they do. You may find this article illuminating, as it nicely describes the way actual scientists view their work at the edges of knowledge (it may align more closely with your own perspective, but it highlights the opposite of the arrogance that you seem to ascribe to most working scientists).

      But, I think the more fundamental part of your statement is where you say “I’ve yet to meet the scientist who is willing to challenge the “more is better” dogma in any fundamental way.” I’ve tried to portray that this is based on experience (both personal and cultural) rather than dogma. If you prefer to label this “dogma of the consensus authority” rather than “the result of hundreds of years of sci-tech history” that’s your prerogative.

      But my question to you, the central question of this discussion, is “What would you replace ‘more is better’ with?” That is, if we were to acknowledge that there should be limits on the kinds of things we wish to study, on the kinds of questions we can ask, on the kinds of curiosity we can encourage, where would we place those limits? Who would decide? How would they decide (given they don’t have the knowledge of the area to know which parts are dangerous)?

      I could think of a few and I’ve worked a little in all of them. We could avoid self-replicating machines. But, you have to be aware that any reasonable definition of “self-replicating machine” would have to include both bacterial and eukaryotic cells. No more research into biofilms, vaccines, stem cells, biofuels, etc. We could avoid research into “General Artificial Intelligence” (GAI) or “Super GAI” (SGAI), but we don’t even know enough about what this is to be able to define useful limits. In the past, we could have limited research into nuclear physics, but then so much of quantum mechanics would have remained a mystery that we might never have developed the transistor and the large-scale integrated circuits that make possible the computers we are using and the internet over which we are communicating.

      So, if we accept that “more [scientific research and knowledge] is better” is a bad paradigm, how do we replace it? With what principles should we guide human scientific development?

  5. Davidian says:

    hehe, I can see if there is a way to edit the CSS for the comment section to make the line breaks double the size.

    edit: done, seems to be working for me – might need to clear your browser cache if it is not working for you.

  6. jakefelasco says:

    Hi Paul,

    We agree on much. I agree that science is a very efficient and credible method of generating new knowledge, and that scientists are overwhelmingly people of good intentions. I also agree that scientists are enthusiastic about challenging each other’s methods and conclusions, and that this process of challenge is built in to the scientific method.

    However, I’ve yet to meet the scientist who is willing to challenge the “more is better” relationship with knowledge in any serious sustained manner. My contention is that science culture is stuck in the 19th century philosophically.

    This might be compared to Catholic theologians who will debate Bible interpretations with great enthusiasm, but are rarely if ever willing to challenge the foundation of their faith, the claimed divinity of Jesus.

    In both cases, science and religion, there is a great deal of debate WITHIN the dogma, but little to no challenging of the dogma itself. Almost all of the debate takes place INSIDE of the conceptual circle created by the foundational dogma.

    To debunk this claim, feel free to introduce me to the scientist who has publicly wondered whether we should perhaps stop learning new knowledge for a bit while we think through where that process is taking us. How long would such a rational scientist’s career survive?

    You ask…

    “But my question to you, the central question of this discussion, is “What would you replace ‘more is better’ with?””

    Reason! A more mature and sophisticated analysis of our relationship with knowledge.

    Does “more is better” make sense in relation to air, food, water, sex or anything else? Wouldn’t you think me nutzo if I proposed a blind unchallenged unlimited “more is better” relationship with pretty much anything? Isn’t such a “more is better” paradigm simplistic, child-like and probably dangerous, whatever it is aimed at?

    What’s interesting to me is that it’s easy to rip the “more is better” relationship with knowledge to shreds, but we aren’t interested in that, and here’s why. We aren’t really listening to reason, but to the group consensus, to what’s “normal”.

    You ask…

    “That is, if we were to acknowledge that there should be limits on the kinds of things we wish to study, on the kinds of questions we can ask, on the kinds of curiosity we can encourage, where would we place those limits? Who would decide? ”

    Everybody asks me this, and when I admit I don’t have the answer, they typically conclude that therefore there isn’t an answer. Which I find quite flattering. 🙂 If Jake can’t answer it, that means there couldn’t possibly be an answer. Yes, let’s go with that!

    You ask…

    “So, if we accept that “more [scientific research and knowledge] is better” is a bad paradigm, how do we replace it?”

    Consider our relationship with food. For endless centuries in the era of food scarcity “more is better” was a sensible relationship with food. Today, in the era of food plenty, that old “more is better” relationship with food is a bigger threat than starvation.

    The place to begin with replacing an outdated paradigm is to grasp that it needs replacement. Once that need is understood by some critical mass of human beings, answers will be found.

    Or, probably more likely, we’ll keep screwing around and wasting time until that one bad day solves the problem for us.

    • Paul Anlee says:

      Hi Jake,

      The reason you’ve “yet to meet the scientist who is willing to challenge the “more is better” relationship” is that you are seeking the impossible: for someone to contradict what they have decided to be their life’s purpose. This is like saying “I’ve yet to meet the politician who is willing to challenge the “voting” idea” (arguably, dictators aren’t politicians) or “I’ve yet to meet the accountant who is willing to challenge the idea of a balance sheet and income statement.” You are asking someone to deny what they see as essential, basic, foundational.

      Further, you admit that you don’t have any answer as to an alternative approach to “more knowledge is better”. That’s not because you’re not smart enough. It’s because it’s impossible.

      Let’s imagine you’re a particle physicist working on the essential nature of matter in the 1940s. You’ve discovered the nucleus and you’re familiar with relativity and the equation about the relationship between matter and energy (E = mc^2). You’re also (obviously) familiar with radioactivity and nuclear fission. Somebody comes to you and says, “This is dangerous! It might open a route to convert radioactive uranium and plutonium into weapons of great destructive potential.”
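
      As a rough back-of-envelope illustration of why that equation implied such destructive potential (my own numbers, not part of the original exchange): converting even a single gram of mass entirely into energy yields

```latex
E = mc^2 = (10^{-3}\,\mathrm{kg}) \times (3\times 10^{8}\,\mathrm{m/s})^2
        \approx 9\times 10^{13}\,\mathrm{J}
```

      With one kiloton of TNT equivalent to about 4.184 × 10^12 J, that single gram corresponds to roughly 21 kilotons, on the order of the yield of the Nagasaki bomb. A 1940s physicist reading E = mc^2 could see the scale of energy in principle locked inside ordinary matter, even before any workable bomb design existed.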

      As a scientist, your answer would probably be something like, “I’m just interested in the nature of matter. I’m curious about fundamental constituents and I just want to understand how it all works.” If somebody then tried to shut down your research because of potential danger, how would/should you react?

      For many scientists, the first reaction would be ridicule. “Where’s the obvious harm in what I’m doing?” They might even scoff at the basic idea that their research implies a bomb, or they might say, “So what? We have already demonstrated our ability to kill each other in enormous numbers (think of the millions who died during WWI from mustard gas, simple chemistry). Whether knowledge is used to create weapons isn’t a scientific question but rather a political one.”

      Almost any scientific knowledge or technological development has the potential for weaponization. Even our dependence on the internet has permitted it to be weaponized through cyber-warfare, just by removing it, stealing information, or subverting its use (e.g. in real time process control). Many scientists discuss this double-edged nature of knowledge all the time. As I said previously, I’ve been on panels with people like Drew Endy (synbio) and Lori Sheremeta (nanotech and genetic engineering). Recently, a number of scientists have spoken out against human genetic engineering through CRISPR technology as well as against the unregulated adoption of weak AI for surveillance and public data collection. These aren’t as general as questioning “more knowledge is better” but they indicate that scientists are aware of, and sensitive to, the double-edged nature of expanding knowledge.

      Looking out on the modern scientific landscape, can anyone tell what knowledge we might gain that could cause the potential for big problems later? If you could do that, you’d already have that knowledge. It’s like trying to predict whether or not a “flux capacitor” is going to have more potential for good or evil. The answer is that knowledge may be benign, but the uses to which people put that knowledge may not be so benign.

      So, I couldn’t tell you whether developing nanotech, quantum computing, general artificial intelligence, genetic engineering, or a host of new areas of science would be ultimately beneficial or detrimental to humanity.

      Would improving crop yield (by improving Rubisco efficiency) be good or bad? It would allow us to feed more people but that would increase the human population. Good or bad?

      Would being able to cure cancer be good or bad? People would live longer, healthier lives, but there’d be more older people around for longer, straining pension systems and the planet. What about other methods for directly increasing human lifespan and healthspan? Good or bad? Would the alternative (not trying to cure diseases that “naturally” limit human numbers and lifespan) be good or bad?

      Would being able to cram more transistors, more CPUs onto a single chip be good or bad? Smaller computers, faster, and more efficient sounds good, but it would also likely lead to greater ability of govts to monitor their residents. Good or bad?

      Knowledge is NOT the problem.

      Rather I’d claim, ignorance is the problem.

      It’s ignorance of the ramifications of anthropogenic activity that allows world leaders to contemplate things like global nuclear destruction, thinking their political group can somehow bounce back from that kind of devastation. It’s ignorance that leads many to believe the planet is a source of infinite resources with an infinite ability to absorb human pollution (whether phosphates, plastics, or CO2).

      It’s ignorance (and greed) that chains people to an artificial economic system (all economic systems are artificial by nature – see my post on this on my personal site) that demands perpetual growth on a resource-limited planet. It’s ignorance (of other languages and other cultures) that allows some greedy people to wage war on their fellow humans. It’s ignorance that allows people to be hyper-nationalistic and patriotic. It’s ignorance that continues to allow people to believe in fictitious Supreme Beings that bless their wars and their particular tribe.

      Ignorance, not knowledge, is the problem.

      We can’t predict (literally can’t, without already knowing) all the ramifications of some piece of knowledge or technology. We can’t predict whether that new knowledge has a greater beneficial propensity or has more potential for evil uses.

      We can predict that national govts will always look for a national advantage over other nations and we can predict this will lead to (foolish and unnecessary) conflict. We can predict that general ignorance, scientific ignorance, and ignorance of other cultures will lead us to make poor choices, to favor conflict where cooperation would work better for a larger number of humans.

      I’d argue we need more knowledge not less. If we had some reason to believe that stopping all science for a decade or two would allow us to contemplate our relationship to our fellow humans and to the planet, and would lead to increased wisdom, maybe a pause would be good. But I see no indication that wisdom develops without pressure that makes that wisdom necessary. People just don’t seem to solve problems much before they become problems. People (especially politicians) don’t generally take the long, inclusive, contemplative view. We just aren’t equipped to deal with that kind of thinking. Perhaps the development of new AI tools will help us in this regard. Or perhaps, that’ll just make our own human limitations all the more evident and encourage us to retire from the galactic scene in favor of the smarter, more rational machines we invent.

  7. jakefelasco says:

    You write, “The reason you’ve “yet to meet the scientist who is willing to challenge the “more is better” relationship” is that you are seeking the impossible: for someone to contradict what they have decided to be their life’s purpose.”

    So, you’re saying that scientists are more interested in science than in reason? Ok, I could see that. But then, um, don’t expect us to then accept scientists as expert reasoners. I do actually agree with you, and have learned (slowly) that it’s not realistic to expect scientists to undermine their budgets. So I tried philosophers. Wow, what a waste of time that is.

    You write, “Further, you admit that you don’t have any answer as to an alternative approach to “more knowledge is better”. That’s not because you’re not smart enough. It’s because it’s impossible.”

    This happens all the time to me. People consider the “more is better” relationship with knowledge for ten minutes, and then confidently declare it impossible to change, even though we are already well underway in editing a “more is better” relationship with food that is millions of years old.

    You may of course be right. Best I can tell, you are. We may be incurably lost in defeatism, simply unable or unwilling to edit outdated paradigms of the past. We may be forever stuck in the 19th century. This could very well be true.

    And what that means Paul, if it is true, is that everything we care about is inevitably going to come crashing down around us, making all of today’s scientific research pointless, because everything learned is going to be swept away in the chaos storm that is coming. This is where “it’s impossible” inevitably leads.

    Human beings can’t handle ever more power delivered at an ever accelerating rate, Paul. If you require proof of this, please observe the thousands of hydrogen bombs we have aimed down our own throats, set on hair trigger, ready to go at any moment with one button push by one single person.

    This is who science wants to give ever more power to, at an ever accelerating rate. Insane semi-suicidal children.

    You write, “Many scientists discuss this double-edged nature of knowledge all the time.”

    I’m sorry, really not trying to be rude, but I simply don’t buy the idea that scientists get this.

    Scientists are very skilled at developing new knowledge. On this we agree completely. However, they basically have no clue, and not even that much real interest, in where ever more knowledge delivered at an ever accelerating rate is taking us. They blow this subject off routinely, just as you are doing. Where are your articles on the subject, Paul?

    Scientists are just going to keep on doing science, more and more and more, faster and faster and faster, whatever the consequences of that might be. They’re stuck in the 19th century philosophically. I accept this now as being the human reality of the situation. I accept that I’m being irrational in expecting it might be different.

    Luckily, I’m 67, so I’ll be departing this nut house soon, and taking my own nuttiness with me. 🙂

    • Paul Anlee says:

      Hi Jake,
      I’m going to admit that I don’t understand what you’re going on about. Of course “scientists are more interested in science than in reason”. Reason (formal logic) is a tool that scientists use; how else could it be? But reason alone is a poor tool for understanding our universe. As I’ve said many times before, evidence is paramount over reason for that purpose. It has to be. It’s far too easy to reason yourself into the wrong conclusions due to a lack of data or poor-quality data. That’s why reason is a crappy method for understanding on its own and must stand beside (perhaps beneath) evidence. This has nothing to do with undermining their budgets or with scientists being poor at reasoning (they aren’t). It has everything to do with reason being of highly limited use.

      I didn’t say it’s impossible to change our relationship (“more is better”) with knowledge. I just said there are no possible guidelines on how to do this, perhaps other than just turning our back on all knowledge and returning to the superstitious Dark Ages (they’re called that for a reason). Who wants to do that? Certainly not any scientist I’ve ever met. Without that dedication to pursuing new knowledge, why would one go into science? You’re expecting people in a field to actively turn their back on that field? Some have, but never because they “discover” that knowing more is bad. And you’ve done nothing to demonstrate knowing more is bad, just that there are idiots in power who don’t appreciate the destructiveness of their weaponry. You are being unfair to lay that at the feet of scientists. How did you come to the belief that scientists are to blame for the aggressiveness of our idiot politicians?

      Tell me a political (non-scientific and non-technological) solution to climate change that doesn’t plunge global economies into permanent recession or kill hundreds of millions (possibly billions) of people. I think the evidence indicates pretty clearly (e.g. increased efficiencies in power production and utilization, solar panels, wind power, geothermal, nuclear, CO2 recycling, etc.) that all workable solutions to climate change have been, and will continue to be, scientific in nature. People who dreamily suggest that we can return to some idyllic, pastoral life without deliberately killing billions of people are either lying or fooling themselves. There is no way back. We simply have to move forward.

      This is not defeatist, far from it. In fact, to simply roll up all the progress we’ve made, turn back the clock, and stop learning anything new is about as big a betrayal of our essential humanity as I can imagine. The road you are proposing is giving up. I can almost agree that “everything we care about is inevitably going to come crashing down around us”, but this won’t be because of sci-tech. It’s far more likely to be because people don’t understand the artificiality of our current financio-economic system, they don’t understand the dictatorship of the wealthy and privileged, and they will follow their idiot leaders (#45 is the best example I can think of) with their ignorant minds made up and their eyes closed to other possibilities for how to organize human society and activity. Fortunately, financio-economic systems can be rebuilt, and I’m working on some proposals over on my site.

      I’m not blowing you off. You’ve really said nothing for scientists to get. I personally know many who are concerned about the specific ethical implications of their research. But there is no evidence, not even all those hydrogen bombs, that suggests the problem lies in our knowledge rather than in our politics, leadership, military, and public ignorance. It was the leadership and the military who decided all those bombs were needed. They never asked the scientists about that. In fact, it’s trivially easy to point to tons of societal benefits science has brought us. So the cost-benefit equation nets out to the benefit, I think.

      I disagree with your statement “they basically have no clue, and not even that much real interest, in where ever more knowledge delivered at an ever accelerating rate is taking us.” Even though I’m now retired from science, I still see news feeds every day where scientists are discussing the implications of their research. As I said in my last comment, there really is no way, without actually developing the knowledge, to know where it might take us. How many could have predicted in the 1950s that the development of tube-based computers would lead to laptops and smartphones and the internet? We can only make decisions about the cost-benefit of knowledge after we know what that knowledge entails.

      That’s why what you’re asking is impossible. There’s no reluctance to address our relationship to knowledge. The specifics are literally impossible. That’s why you and others are unable to say where we should stop. I’ve given you several specific challenges in synbio, nanotech, and AI. You can’t answer them because they’re unanswerable without first developing the knowledge you want to avoid. We have no ability to predict what science will uncover (and what the technological ramifications will be) until we actually discover it. I’m not sure why you don’t understand why what you’re asking is inherently, intrinsically impossible. I’ve tried to reason it out for you, but I can’t understand it for you.

      I’d write about it if it actually were a subject, but it’s not. It’s an impossibility and I don’t get involved in those.

      However, there might be “good news” for you. Your perception that science grows “faster and faster” is a confusion of the rate of publication with the rate of knowledge growth. When I was programming computers professionally, we used to release updates to our software every few years. Some of them were significant improvements, but most just tweaked the “look and feel”. I’d say companies like Microsoft have been doing this for some time. Science isn’t growing in actual knowledge as fast as some people think. Doing science is hard. New knowledge that substantially adds to our understanding takes time. I can easily sequence the genomes of a number of new organisms without gaining any new understanding of those organisms. Figuring out what genes do in the context of their specific organism takes time. That’s why some physicists have even started questioning building yet more powerful particle accelerators, as the current generation (the LHC) has failed to find any new particles beyond the Higgs boson (predicted by the Standard Model), despite searches for those predicted by extensions such as supersymmetry.

      So, outside the political arena, we may have more time to develop an understanding of the technological and societal ramifications of new science than you think. But I simply can’t see how stopping scientific research or technological development in any area is going to make any difference to whatever disasters you may see about to befall us.

      I read tons of economics, finance, and politics every day. There are lots of indicators where ignorance in our leadership is leading us to poor policy (trade wars, sanctions, inhumane immigration, denial of medical care, pension plans about to blow up, etc.). Even the decision on whether or not one particular orange fool pushes the button that orders the launch of enough nuclear missiles to destroy civilization has more to do with the ignorant belief-system of that fool than it does with the fact of nuclear fission or fusion. He probably believes he will somehow be “raptured” into his deserved heaven. But any objective examination would tell him that death is death, there is no afterlife and certainly no Heaven; the “rapture” is a delusion, the end of humanity is the end of our story, period.

      Again, it is unfair to put that on science or technology. It is a purely political decision by someone with little understanding of the world and little empathy for anyone that isn’t a rich friend. I’d hope that you could agree with this assessment; I’m completely befuddled with why you want to blame that on scientists. As an analogy, I have written about how foolish American gun laws are. But even I don’t place the blame for mass shootings on the mere knowledge of how to make guns. The decision to proliferate guns throughout a society is a business, social, and political decision, not a technological one. We have to place the responsibility in the right place.

      If you want the last word on this, go ahead. I think I’ve said everything that I can possibly say to convince you that you’re blaming the wrong thing and the wrong people. Unless you have some new point to make, I won’t bother replying to your last words.

  8. jakefelasco says:

    Hi again Paul,

    You write, “I’m going to admit that I don’t understand what you’re going on about. Of course “scientists are more interested in science than in reason.”

    Ok, so let’s game that out for a minute. If my theory should be true, that more and more knowledge without limit will inevitably lead to civilization collapse, then the science being done today is largely pointless, given that most of what is learned will be swept away in a coming chaos.

    If this is true, and it’s clearly debatable, then science culture might be fairly labeled brilliant, but blind.

    It basically boils down to how much power we think human beings can successfully manage. If there is a limit to that, then the “more is better” paradigm will find that boundary sooner or later.

    You write, “I didn’t say it’s impossible to change our relationship (“more is better”) with knowledge. I just said there are no possible guidelines on how to do this”

    Agreed. And that’s because we rarely if ever think about such things, and just keep pushing, pushing, pushing ahead on 19th century philosophy.

    You write, ” You are being unfair to lay that at the feet of scientists. ”

    I agree. We hire scientists to develop new knowledge, and they do what we’ve paid them to do with great skill. So I’m not demonizing scientists. I’m just saying that’s who has the cultural authority to lead the journey into a more mature relationship with knowledge. Technically, this should be the job of philosophers, but they are hopeless and have little to no cultural authority.

    You write, “There is no way back. We simply have to move forward.”

    But I am the one arguing for moving forward, and you are the one arguing that we stick with 19th century philosophy. In your defense, this is very very common, and not just a function of science culture or you personally.

    You write, “This is not defeatist, far from it. In fact to simply roll up all the progress we’ve made, turn the clock back, stop learning anything new, is about as big a betrayal to our essential humanity as I can imagine.”

    I’m arguing that we try to learn something new. You’re arguing either that we can’t, or we shouldn’t, or both.

    You write, “You’ve really said nothing for scientists to get. I personally know many who are concerned about specific ethical implications of their research. But there is no evidence, not even all those hydrogen bombs, that suggest the problem lies in our knowledge rather than in our politics, leadership, military, and public ignorance. ”

    Who gave the bombs to the political idiots Paul?

    You write, “I still see newsfeeds everyday where scientists are discussing the implications of their research.”

    Show me the articles where some scientist says we should stop learning for a bit while we figure out where all this is leading. Yes, scientists worry about the implications of their research, while continuing to do that research as fast as they are funded.

    You write, “That’s why what you’re asking is impossible. There’s no reluctance to address our relationship to knowledge. The specifics are literally impossible.”

    They are impossible because cultural elites of all flavors focus on chanting, over and over and over again, that it’s impossible. Just as you are doing.

    You may indeed be right. If you are, we are doomed.

  9. jakefelasco says:

    Evolution is obviously real, but yet it doesn’t exist.

    Evolution has no size, no shape, no weight, no mass, no color or form. Evolution meets none of the conditions we typically use to define existence. We can’t pour a cubic foot of evolution into a plastic bucket.

    We can observe the effects evolution has on life, but we can’t observe evolution itself, because evolution is invisible. Evolution is a process, a concept, an abstraction which has no substance of its own, and yet is entirely real.

    Evolution might be described by some as a form of invisible intelligence embedded in reality, which works to continually adapt life to its environment every single second, in every time and place. Though the process of evolution is mechanical and, as far as we know, totally random, it does operate in a manner we could reasonably compare to intelligence. And yet…

    It doesn’t exist. Real. But non-existent.

    • Davidian says:

      // Evolution is obviously real, but yet it doesn’t exist. //

      I think you are using a different definition here.
      Evolution might not be tangible, but it definitely exists.
      Evolution has an objective reality.
      It is a process that happens independent of our minds.
      The process most definitely exists.

    • Paul Anlee says:

      Hi Jake and Davidian,

      You aren’t really aware of it, but you’re both using quite different and imprecise meanings of the word “exist” than I was. When I apply “exists” to something physical, it has the meaning: “is composed of fundamental particles in a relationship.” Evolution clearly doesn’t “exist” in this sense; it is conceptual, meaning it is a description of a complex relationship, or what I’ve called an “emergent phenomenon” or “emergent property.”

      When I first used this shorthand in a reply to Jake, I thought he might be familiar with it. I was clearly wrong. Let’s take a quick look at something else that demonstrates what I mean by an emergent property, and why it doesn’t “exist” in the same way that something material exists and can’t be discussed with the same word with the same meaning.

      We can construct a pile of dry sand by adding one grain at a time. Physicists have actually done this to try to understand how and when the sand pile loses its “structural stability.” As the sand pile grows, it gets higher and broader. From time to time, parts of the pile may lose that “structural stability” and collapse in avalanches. You can build a computer simulation of the pile of sand and derive a way to describe this “structural stability” by looking at the relationships between grains.

      But “structural stability” describes the conceptual relationship between the grains of sand in the pile. It doesn’t “exist” in the same sense that the sand grains “exist”. We can see “structural stability” only in the relationships between the grains; it is a derived (or emergent) property of these relationships. But there’s no way you can really discuss “structural stability” separate from the pile of sand grains that encodes it.

      Structural stability doesn’t “exist”.
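      For anyone who wants to see this concretely, here is a rough sketch of the kind of simulation I mean, in Python. It uses the standard toppling rule from the physicists’ sandpile models (a simplified Bak–Tang–Wiesenfeld scheme); the grid size and numbers are illustrative, not from any published experiment:

```python
import random

def simulate_sandpile(size=20, grains=2000, threshold=4):
    """Drop grains at random on a grid; any cell holding `threshold`
    or more grains topples, shedding one grain to each of its four
    neighbors. Grains that fall off the edge are lost. Returns the
    avalanche size (number of topplings) each dropped grain caused."""
    grid = [[0] * size for _ in range(size)]
    avalanche_sizes = []
    for _ in range(grains):
        x, y = random.randrange(size), random.randrange(size)
        grid[y][x] += 1
        toppled = 0
        stack = [(x, y)]
        while stack:
            cx, cy = stack.pop()
            if grid[cy][cx] < threshold:
                continue  # this cell is stable; nothing to do
            grid[cy][cx] -= threshold
            toppled += 1
            for nx, ny in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
                if 0 <= nx < size and 0 <= ny < size:
                    grid[ny][nx] += 1
                    stack.append((nx, ny))
            stack.append((cx, cy))  # may still be unstable; recheck it
        avalanche_sizes.append(toppled)
    return avalanche_sizes

sizes = simulate_sandpile()
print("grains dropped:", len(sizes), "largest avalanche:", max(sizes))
```

      Notice that the “structural stability” the simulation reveals, the pattern of mostly tiny avalanches punctuated by big ones, is nowhere in any single grain. It shows up only in the relationships between grains, which is exactly what I mean by an emergent property.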

      This is an example of the things that humans and our natural language aren’t really good at discussing. Yet many of the important phenomena or properties that we talk about in everyday conversation are similar emergent (conceptual) properties. In my view, this would include things like “love”, “information”, “consciousness”, etc.

      Evolution is a process; it is a long-term conceptual description of changes in allele frequency. That sentence is loaded with things that don’t “exist” in any physical way but are descriptions of relationships between things that do exist. To me, this is the only way in which we can speak of God existing: as a concept, not as a physical being.
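      To make “changes in allele frequency” concrete, here is a toy sketch of random genetic drift, the simplest evolutionary process. (This is my illustration only, a bare-bones Wright–Fisher-style model with made-up numbers, not a claim about any real population.)

```python
import random

def drift(pop_size=100, freq=0.5, generations=200):
    """A toy drift model: each generation, the population's gene pool
    is resampled at random from the previous generation's frequency.
    The allele frequency wanders with no selection at all; 'evolution'
    here is nothing but this trajectory of changing frequencies."""
    trajectory = [freq]
    for _ in range(generations):
        # each of the pop_size gene copies is drawn from the current pool
        count = sum(random.random() < trajectory[-1] for _ in range(pop_size))
        trajectory.append(count / pop_size)
    return trajectory

path = drift()
print("starting frequency:", path[0], "final frequency:", path[-1])
```

      No individual organism in this model “is” evolution; the process exists only as a description of how the frequencies change across generations.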

      Sadly, English (like all natural languages) is sloppy with words, many of which have multiple, imprecise (from the perspective of physics) definitions. Words get used without the users being aware that they are applying two different definitions.

      In this regard, Jake talked about how “space” also doesn’t fit the “exist vs. non-exist” paradigm. After some more thought, I should probably agree as “exist” doesn’t apply (in the physics sense) to concepts/emergent properties. Perhaps, we should say it “conceptually emerges” or “is a concept.”

      Of course, there are some physicists who make spacetime fundamental, as well as those who see it as emergent. I can’t say which is correct. Perhaps matter is composed of fundamental spacetime; perhaps spacetime is emergent from matter or something even more fundamental. Clearly, we need more precise ways to discuss these things.

  10. jakefelasco says:

    Hi Davidian,

    Right, evolution is real. But it doesn’t meet any of our definitions of existence. No weight, no mass, no shape or form etc.

    If by your definition evolution exists, then ok, why can’t other things which have no weight, no mass, no shape or form, are invisible, and have never been seen by anyone, etc., also maybe exist?

    • Davidian says:

      I suppose this is an issue with language, words do have a number of different definitions.
      It does not hold a tangible existence
      but does hold an observable existence in the sense it has an objective reality and supporting evidence.

      //If by your definition evolution exists, then ok, why can’t other things which have no weight, no mass, no shape or form, are invisible, never been seen by anyone etc also maybe exist?//

      See here you are stretching my definition a bit.
      Yes, there are likely many things we have not yet observed that exist… however the things that we would say do exist we would see the effect of.
      We can see the effect of a strong wind, even though we cannot see that wind with our naked eye.
      We can see the effect of evolution even if we cannot see it happening before our very eyes.

      If something has no presence in any way, no way to detect or observe it, it is as good as not existing, until such time it shows us otherwise.

  11. jakefelasco says:

    Paul writes, “…evolution is conceptual, meaning it is a description of a complex relationship or what I’ve called an ‘emergent phenomenon’ or ‘emergent property.’”

    That works for me. So we see a phenomenon which is very real, but doesn’t exist in the technical sense of being “composed of fundamental particles”. In one sense it’s there; in another it’s not.

    It’s interesting that evolution is rather paradoxical, defying logic in a way. It’s both totally random, yet it also performs a service for life that can reasonably be compared to intelligence. You know, if we created an auto-adjustment service which runs on its own doing something very important, we’d probably label it an intelligent system.

    So we see something which is both real and non-existent, and also both random and in a way intelligent.

    What I’m attempting to do here is shine a light on the limits of reason, which typically tries to create neat and tidy little conceptual divisions which often don’t reflect the real world as well as we think.

    Does God exist, or not exist? Probably a bad question. The world’s greatest minds, arguing for centuries over an either/or question which is perhaps incapable of ever delivering a good answer. And we continue doing the same thing that has never worked, over and over again, with great enthusiasm. Einstein’s definition of insanity?

  12. jakefelasco says:

    Paul writes, “This is an example of things that humans and our natural language aren’t really good at discussing.”

    Yes! And that’s because thought, that which language is made of, comes with a built-in bias for division. Thought creates boundaries which are conceptually useful, but misrepresent the real world.

    Perhaps what the theists call God is an emergent property of reality, like evolution. Real, but not existing. Random and intelligent too.

    The human mind rebels at the violation of our tidy little conceptual categories. Reality rolls its eyes and shrugs it off.

  13. Paul Anlee says:

    Hi Jake,
    I’m not a big fan of the idea of thought without concept and the notion that concepts misrepresent the “real” world. Given that the photons that fall on your retina are useless without extracting concepts from the patterns, I don’t find it useful to consider a non-conceptual processing of the world. I understand the Buddhist origin of this idea but I think it’s one of the least useful ideas from that tradition.

    I can’t imagine how “God” could be an emergent property of reality, except insofar as it emerges from human thought (and superstition), which is an emergent phenomenon. Still, I’m okay with “real but not existing” (i.e., conceptual). But “intelligence” would require God’s mind to actually exist, so I can’t agree with that.

    Humans have physical brains. I happen to think that our thought is an emergent phenomenon of the activity of those brains. So, in my view anyway, God would need some real (physical) brain in order to be intelligent. That would require existence.

    Further, while it is difficult (if not impossible) to make “tidy little conceptual categories” for everything in the universe, our complex neural nets have no trouble making sloppy, fuzzy conceptual categories. “Reality” has no integral response as it is not a conceptual processor.

  14. jakefelasco says:

    Hi Paul, good to chat with you again.

    Does the thought “Paul is a tall, dark and handsome science nerd” represent you fully? Does your photo on Facebook represent you fully? Or are these just useful symbols which point towards you, but are not actually you? You know, a street sign is not the town it points to.

    More to the point, thought is an electro-chemical information medium which, like all natural phenomena, is defined by various limits. These limits are a form of bias which should be taken into account.

    A key property of thought is that it operates by a process of conceptual division. Nouns are an easy example of this division process. The word “tree” suggests a unique, separate object, but in the real world there is no such phenomenon. A tree is better described as an aspect of a system involving sun, soil, air, insects, far-distant exploding supernovae, and so on.

    A non-conceptual experience of reality is useful in that it helps shine a light on this built-in bias for division within thought. This is very related to religion, but space is short here so I’ll keep moving.

    As best I can tell, the concept of intelligence is drawn from an immeasurably small slice of reality: human scale, on the planet Earth. Thus, the question “is reality fundamentally intelligent (i.e., God) or is it just a big machine?” may be hopelessly flawed. My best guess is that we’ve been arguing over hopelessly flawed questions for centuries, and that explains why we never arrive at credible answers.

    You say that a God would require existence to be real, but we’ve already shown that the dualistic question of existence (yes or no) may be one of those hopelessly limited concepts which pollutes the God debate.

    I think there is a way out of all this mess. The real purpose of religion is to help manage our RELATIONSHIP with reality. God claims are just a means to that end. My suggestion is that we might focus on this bottom line goal of managing our relationship with reality, and not worry so much about how other people are going about that.

    It doesn’t seem rational to me to endlessly ride the merry-go-round to nowhere of the God debate, given that there is no evidence that this process ever leads to anything but more of the same.

    It does seem rational to focus on what we can control, our relationship with reality. If we shift our focus from one particular means (religion) to the desired end that means is reaching for, we might explore some new territory.

    As for those who feel religion is a threat, I would argue the only way religion will ever be over is if a better means of managing our relationship with reality is made available.