can’t see the forest

Well-adjusted Atheism and the Dialog with Theism Reconsidered


Prof. Richard Dawkins is a biologist, science popularizer, and advocate of atheism whose positions I tend to support and whose passion I deeply admire. His books, such as The God Delusion, sell quite well, and he can be seen in auditoriums and television studios the world over, patiently and persistently explaining—with more than a touch of righteousness, one feels—the truth of evolutionary theory and the perils of religious belief. I happen to share the Professor’s point of view that religious dogmas and mentalities have in practice been at least as aggregately destructive and divisive as benevolently useful, but there are some things about Dawkins’ approach, or at least what I take to be the most common interpretations of it, that I have always found troublesome. These points, I should stress, are not specific to Dawkins’ thought only, and are not meant as personal criticism. I bring them up because I think they are critical to understanding the nature of atheism and its relationships to science and religion, and I don’t believe they’re adequately developed in the work of Dr. Dawkins and others like him. My goals here are to help clear things up and for us all to get along peacefully, and that takes work. :-)

I know it hurts in a punch-to-the-gut way for some of us to subject ourselves to the voice of Bill O’Reilly for any reason, but take a look at this 5-minute interview with Richard Dawkins that happened on the O’Reilly Factor a couple of years ago. I’d like to use it as a starting point:

Notice Dawkins’ statement near the beginning of the interview that science keeps “piling on the understanding.” The implication here is that an individual adopts an atheist worldview as a result of scientific enlightenment leading to metaphysical revelation—almost invoking the idea of scientist as evangelist.

Now, it is certainly true that a systematic and rigorous understanding of nature, and of ourselves as part of it, does not support theism as fact and indeed presents copious factual evidence against a lot of the basic tenets of theist, creationist systems of metaphysics. But it does not follow that all the scientific knowledge in the world could necessarily compel a person to abandon his or her faith. Many prominent religious figures from various traditions, as Dawkins is always quick to point out, accept the theory of evolution in one version or another, even though it might speak unambiguously against certain aspects of the dogmas they represent.

For me, Dawkins is tilting at windmills in his quest to deconvert the faithful through an overwhelming preponderance of scientific data, and potentially alienating those who might benefit most from his message. First of all, you can’t confront real obstinacy with logic and language. Language just isn’t that powerful. Consider Lewis Carroll’s familiar dialog, ‘What the Tortoise Said to Achilles,’ in which he demonstrates the futility of using reason to truly force a conclusion. If C is logically supposed to follow from A and B, a person who accepts both A and B as true can find, literally, infinitely many ways of casting into doubt the logical necessity of C. You can lead the mind to logic, but you can’t make it think, if you’ll excuse the poor humor.

The same point is delightfully demonstrated in this anecdote from atheistwiki:

Many years ago, when I was a Psychology student, we had a lecturer who told stories of his own early life as a young clinical psychologist. One story he told was of a psychotic patient who was under his care. This man was quite normal in other ways, but he believed that he (the patient) was dead. So one day my lecturer decided to try some cognitive therapy on him:

Lecturer: Tell me, do dead men bleed?
Patient: No, of course not. How could they?
Lecturer: (Sticking a pin in him) Well, how about that?
Patient: Good God! That’s amazing! I was totally wrong! Dead people do bleed!

In the television interview above, O’Reilly doesn’t really “throw in with Jesus” because he is uncomfortable with how much science has yet to “figure out.” That is pretense, an attempt to take away Dawkins’ intellectual leverage, leverage which I think is probably misplaced to begin with. O’Reilly chooses the Cross at Calvary over the Hubble Space Telescope, so to speak, because of the former’s symbolic power and its centrality to the cognitive guidelines along which so many of his neurons are so steadfastly organized, regardless of the intellectual paradoxes and contradictions endemic to his faith. This subtext no doubt resonated powerfully among his viewership. Expressed in modified terms, it would have resonated powerfully among aboriginal Australians, whirling dervishes, or the tribes of the Amazon, and for much the same reasons.

People aren’t religious because they’re uneducated, inherently illogical, or don’t know anything about astrophysics. Isaac Newton was, after all, a deeply Christian man, and J.S. Bach is only one titanic example among numerous composers and artists whose emotionally compelling and intellectually formidable works were dedicated to the glory of God. No, people are religious because religious belief fills certain psychological—some would even say biological—needs, and serves social purposes deeply entrenched in human interactions with one another and the environment on a pan-cultural basis. This is why, even though I enthusiastically agree with Dawkins in many respects, I take issue with his pointed portrayal of religion as an illness or delusion which must be scientifically educated into oblivion.  I think such a goal is neither clearly profitable nor definitively wholesome. If anything, it seems only to fuel misconceptions, ill will, and defensiveness among believers.

As Joseph Campbell eloquently observes at the outset of Primitive Mythology, the first volume of his epic essay The Masks of God:

Every people has received its own seal and sign of supernatural designation, communicated to its heroes and daily proved in the lives and experience of its folk. And though many who bow with closed eyes in the sanctuaries of their own tradition rationally scrutinize and disqualify the sacraments of others, an honest comparison immediately reveals that all have been built from one fund of mythological motifs—variously selected, organized, interpreted, and ritualized, according to local need, but revered by every people on earth.

A fascinating psychological, as well as historical, problem is thus presented. Man, apparently, cannot maintain himself in the universe without belief in some arrangement of the general inheritance of myth. In fact, the fullness of his life would even seem to stand in a direct ratio to the depth and range not of his rational thought but of his local mythology.

Another of my intellectual heroes, the astronomer and author Carl Sagan, once wrote hopefully of a coming time in which the joy of using science and reason to approach the wonders of nature might someday unify nations and cultures in contrast to the various ways in which myth-based approaches have helped to divide them throughout history. This is a noble aspiration, and one full of possibility—for science is at least as capable of building partnerships and enriching humanity as it is of constructing atomic bombs. But science cannot merely take the place of religion any more than one could suddenly impose the Qur’an on heartland America, and, in fact, the latter might be the more feasible possibility. This is because there is something about religious symbolism, ritual, and mystery that is fundamental to the human psyche, and this is, I believe, the main reason that well-meaning free thinkers such as Dawkins and Sagan have sometimes missed the mark in an important sense. As marvelously productive and existentially liberating as the Enlightenment might have been, European philosophers would be ill at ease in a primitive environment where the mystic wisdom of the shaman holds the keys to survival. It was probably not in search of a more rigorous understanding of the cosmos that the Great Pyramids were built, or the Mass in B minor composed.

Ernest Becker, the late anthropologist and psychologist whose writings continue to gain prominence in academia, wrote in The Birth and Death of Meaning an accurate and insightful account of the psycho-social machinery, some of it quite dark, served by religious belief. For example:

No religion gives any easy resolution to its central myth, by which I mean that ideal religion is not for compulsive believers. As psychoanalysis has taught us, religion, like any human aspiration, can also be automatic, reflexive, obsessive. … To believe that one has a higher reason to take human life, to feel that torture and murder are in the service of a divine cause is the kind of mandate that has always given sadists everywhere the purest fulfillment: they are free to remain on the level of the body, to pillage real flesh and blood creatures, to transact in lives in the service of the highest power. What a delight. … Genuine heroism for man is still the power to support contradictions …

Intellectual duality and contradiction lie at the heart of religion, and this has been understood since long before Galileo or Darwin. In Christian apologetics, the problem of theodicy—the existence of evil in the universe of a benevolent and omnipotent creator—has been central for almost as long as Christianity has existed, and there were direct precedents and indirect analogs even before that. This fundamental basis in contradiction and resistance to an objective, rational reality is the crucial strength of religion, not its weakness, as many atheist proselytizers seem to believe. Through the embrace of contradiction, man lives an existence which is defined by his own terms and values, and is able to resolutely justify his actions amid the bloody, pulsating chaos of life, Tennyson’s “Nature red in tooth and claw,” according to an immutable and permanent scheme, which he can conveniently take to be the mandate of the highest and most perfect possible authority. If such a modus operandi is delusional in nature, it is also so deeply central to human cognition that even well-seasoned atheists can be caught invoking the name of God Almighty in moments of real trial or terror.

The paradigm of learning via evidence-based thought is immensely powerful and has been astoundingly productive. Like Sagan and many others, I think there is real hope that it can transform humanity and bring people together to solve problems in which we all have a stake. In fact, it already has: consider the ways in which modern medicine is creating new possibilities in spite of the greed of corporations, and how the Internet is creating intercultural exchange on an unprecedented scale despite its role in porncasting. There is, I must profess, no sense in waiting on Jehovah to end hunger, disease, and divinely foster the sense of interconnectedness and interdependence that humankind needs in order to make good on its present situation, and viewing all the ills of the world as divine will to be bravely and humbly accepted is not an attractive solution to the miseries of real organisms, human or otherwise.

But well-adjusted atheists cannot expect to share their viewpoints as long as the strategy is to supplant religion and to bash ages-old and psychologically central beliefs with a club made of scientific theory, because religious belief is not a question of insufficient evidence to the contrary, and is far from a symptom of a faulty mind. Atheism is arrived at not through an understanding of facts, figures, and logical constructions. It arises from one’s consideration that religion may be a cultural phenomenon, the most basic kind of literature, one whose purpose is not entertainment or even instruction so much as the definition of who people are as individuals and as groups in ways that are fundamental to conscious organisms. The historical record combined with the illuminating discoveries of comparative mythology and psychoanalysis provide, for most, ample support of this conception.

For me, being an atheist is a profoundly empowering experience: it represents the ability to construct one’s worldview in the most pure, honest, vulnerable, and nobly independent sense, and the realization that man has always created his gods, by the hundreds and thousands, in man’s own image. It is empowering precisely because I arrived at it through my own volition. People can be either receptive or hostile to the suggestion, but in no case can they be won over to it. To subscribe to atheism is, I maintain, a process of exchanging one manmade contradiction—that of a perfect, divinely ordained order rife with brutality and strife—for another more constructive and intellectually challenging contradiction, that of rational, organizing man amid such beautiful probabilistic chaos. Such an exchange is the result of volition, not of compulsion, and should be treated accordingly. If atheists expect theists to listen with well-adjusted, open minds, we had better lead by a better example.

Perhaps Sir Francis Bacon said it best when he wrote: “A little philosophy inclineth man’s mind to atheism, but depth in philosophy bringeth men’s minds about to religion.” Scientific thought and achievement certainly conflict with the dogmatic nature of myth-based belief and ritual, but science is not a cure for religion and should not be treated as such. The two are sides of the same coin. Given sufficient time and room and even decently favorable conditions, man’s notion of spirituality will develop in its own way according to his experience of the world around him, just as it always has. And so, atheists and theists alike should move forward in conversation and not in aggravated opposition—a tall order to which we must rise if we are to survive long enough to see our development through.

The ‘Thirsty ’30s?’


Speaking to the United Kingdom’s 2009 Sustainable Development conference, top government scientist John Beddington projects that by 2030 the world as a whole will face critical shortages in food, water, and energy beyond anything yet experienced on a large scale.

According to Professor Beddington, the world of 2030 will be populated by about 8.3 billion people. Demand for food and energy will have increased by 50%, and fresh water demand will have jumped up 30%.

BBC News reports:

Prof Beddington said the concern now – when prices have dropped once again – was that the issues would slip back down the domestic and international agenda.

“We can’t afford to be complacent. Just because the high prices have dropped doesn’t mean we can relax,” he said.

Improving agricultural productivity globally was one way to tackle the problem, he added.

At present, 30-40% of all crops are lost due to pest and disease before they are harvested.

Professor Beddington said: “We have to address that. We need more disease-resistant and pest-resistant plants and better practices, better harvesting procedures.

“Genetically-modified food could also be part of the solution. We need plants that are resistant to drought and salinity – a mixture of genetic modification and conventional plant breeding.

Better water storage and cleaner energy supplies are also essential, he added.

Prof Beddington is chairing a subgroup of a new Cabinet Office task force set up to tackle food security.

While unstable geopolitics, environmental issues such as climate change and pollution, and financial mayhem all clamor for the attention of today’s busy technocrat, some scientists point out that this simple, potent mixture of rising demand for resources and an aggressively booming population is perhaps the biggest problem our global society has currently to address.

Not such a bright guy? Neither are his little guys.


Okay, that’s taking it a little too far. But a study from the UK Institute of Psychiatry published in the journal Intelligence claims to have found direct correlations between a man’s mental aptitude and the cleverness of his sperm.

Working with data from 425 U.S. servicemen in the Vietnam War, the research team found that, “independently of age and lifestyle, intelligence was correlated with all three measures of sperm quality – numbers, concentration, and ability to move.”

Other than making themselves feel better, the scientists are interested in the genetics of intelligence and how they might be related to other measures of fitness and health, such as sperminess. While the statistical links found are small, the researchers say they are valid and telling, and cannot be explained away by lifestyle factors: the effect is not going to make a great difference in men’s ability to conceive, but men of above-average intelligence do tend to produce above-average sperm, the study says.

From BBC News:

Lead researcher Dr Rosalind Arden said: “This does not mean that men who prefer Play-Doh to Plato always have poor sperm: the relationship we found was marginal.

“But our results do support the theoretically important ‘fitness factor’ idea.

“We look forward to seeing if the results can be replicated in other data sets, with other measures of intelligence and other measures of physical health that are also strongly related to evolutionary fitness.”

Dr Allan Pacey is an expert in fertility at the University of Sheffield.

He said: “The fact that it’s possible to detect a statistical relationship between intelligence and semen quality in adult men probably says more about the co-development of brain and testicles when the man was in his mother’s womb, and therefore how well they both function in adult life, rather than suggesting that playing Sudoku can somehow stimulate more sperm to be produced.

“The improvement in semen quality with intelligence observed in this paper is small and therefore it is unlikely to have a big impact on the ability of men of different intelligences to conceive.”

Implant Works to ‘Read Man’s Thoughts’

Posted in brain, computers, medicine, neurology, Science, technology by Curtis on 11/30/07


For eight years, Eric Ramsay has been ‘locked in.’ Since a terrible car crash, Ramsay has been conscious but almost completely paralyzed and unable to communicate with the world around him except through eye movements.

But neuroscientists from Boston University, according to Communist Robot, have implanted an electrode in Ramsay’s brain which, they say, currently allows them to correctly record the sounds Ramsay is imagining about 80% of the time. That is, the patient merely thinks the words he would like to say, and the electrochemical signals recorded by the device are interpreted by the researchers. The implant monitors the activity of about 41 neurons located in an area of the brain which is responsible for generating speech (perhaps Broca’s area? Just an editorial guess).

Soon, the electrode will output to a computer which will play the interpreted sounds back to Ramsay in real time, allowing him to more precisely calibrate the device to the speech he is imagining.

Exponents, Logarithms, and the Number e

Posted in algebra, Education, mathematics, numbers, Science by Curtis on 11/25/07


Mathematics was never one of my academic strengths. Blog admin isn’t a headline on my résumé, either. In college, though, I’ve found myself to be far less averse to the numerical arts than in high school, a change of attitude I chalk up to better teachers and to my own growing curiosity in fields such as cosmology, linguistics, analytic philosophy, and the like. I’m still in no danger of becoming a mathematician, I assure you; consequently, the part of the post wherein I actually know what I’m talking about probably ends with this period.

This term, the prof took us on a scenic, leisurely tour of linear and quadratic equations. When we arrived in the country of exponential and logarithmic functions, however, he quickened his pace severely and proceeded to administer the most grueling, extensive exam of the term thus far. Naturally, necessarily, I was moved to investigate the material on my own. I am sharing some of my findings in hopes that they may prove interesting and useful to other beginning and intermediate students of mathematics. This is not to mention providing a platform for the more algebraically inclined to prove what a dunce I am. Be kind, I prithee, for sooth.

Exponents and Rules for Exponentiation

Just as multiplication is a shortcut for repetitive addition, so exponentiation is a shortcut for repetitive multiplication. We know that $2 \cdot 5 = 2 + 2 + 2 + 2 + 2 = 10$ ; similarly, $2^5 = 2 \cdot 2 \cdot 2 \cdot 2 \cdot 2 = 32$ . We read the multiplication as “2 times 5,” and each of these two numbers we call the factors of the resulting product, which is 10. The exponentiation we might read as “2 to the fifth power,” or simply “2 to the fifth,” where 2 is the base, the number being operated on, and 5 is the exponent. An exponent essentially tells us how many copies of the base to multiply to complete the operation. So, $3^2 = 3 \cdot 3 = 9$ , and $3^4 = 3 \cdot 3 \cdot 3 \cdot 3 = 81$ .

There are certain rules for exponentiation, or properties of exponents, if you will, which give them their usefulness as a shortcut for repeated multiplication. Dealing with expressions and equations involving exponents is made much easier if one masters these few concepts from the outset:

1. $a^m \cdot a^n = a^{(m+n)}$ . That is, if we multiply a base to one power times the same base to another power, the product is equal to our base to the power of the sum of the two exponents. For example, $2^2 \cdot 2^3 = 2^5$ , which is 32. Conversely, we know that we could rewrite $2^5$ as $2^1 \cdot 2^4$ , in addition to $2^2 \cdot 2^3$ , since the sum of the exponents is 5 in both cases.
2. $a^m / a^n = a^{(m-n)}$ . Similarly, dividing a base to one power by the same base to another power gives a quotient equal to the given base to the power of the difference of the two exponents. $2^5 / 2^2 = 2^3$ , or $32 / 4 = 8$ .
3. $(a^m)^n = a^{(mn)}$ . If we raise a base to a power and then, in turn, raise that product to another power, the end product is equal to our base to the power of the product of the two exponents. For example, $2^6 = {(2^2)}^3$ , or $64 = 4^3$ .
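These three rules are easy to verify numerically. The following Python sketch (my own illustration, not part of the original post) checks each rule with the same small values used above:

```python
# Rule 1: a^m * a^n = a^(m+n)
assert 2**2 * 2**3 == 2**(2 + 3) == 32

# Rule 2: a^m / a^n = a^(m-n)
assert 2**5 / 2**2 == 2**(5 - 2) == 8

# Rule 3: (a^m)^n = a^(m*n)
assert (2**2)**3 == 2**(2 * 3) == 64
```

Python’s `**` operator performs exponentiation, so each assertion mirrors one rule exactly.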

Special cases and notes on terminology with which to be familiar include:

1. Exponents 1 and 0. Any base raised to the first power is that number itself, so $2^1 = 2$ , and $1449^1 = 1449$ . Any base raised to the 0 power (even base 0, by most interpretations) equals 1, so $2^0 = 1$ , and $1449^0 = 1$ as well. This is because, according to our rule (1) above, we know, for example, that $3^3 = 3^2 \cdot 3^1$ , or $27 = 9 \cdot 3$ . Following the same principle, we see that $3^1 = 3^1 \cdot 3^0$ , since the sum of the exponents 1 and 0 is 1 and since $3^1 = 3$ ; the quantity $3^0$ , then, must equal 1, and indeed any number to the 0 power must be 1 for the same reason.
2. Exponents 2 and 3. The common terminology for $x^2$ is “x squared,” and $x^3$ is read as “x cubed.” This is because of geometry; if we take the length of the side of a square as our base and raise it to the second power, the result is the area of the square in square units. Similarly, the length of the side of a cube raised to the third power gives the volume of the cube in cubic units.
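The zero-exponent case can likewise be checked directly. This short Python sketch (illustrative only) derives it from rule (2) and confirms the square and cube terminology:

```python
# By rule (2), a^n / a^n = a^(n-n) = a^0, and anything divided by itself is 1.
for base in (2, 3, 1449):
    assert base**3 / base**3 == base**0 == 1

# Exponent 1 leaves the base unchanged; 2 and 3 give squares and cubes.
assert 1449**1 == 1449
assert 5**2 == 25   # area of a 5-by-5 square
assert 5**3 == 125  # volume of a 5-by-5-by-5 cube
```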

Negative and Fractional Bases

What happens if our base is a negative number, assuming we raise it to a positive, whole exponent?

1. $(-x)^n$ , where n is an even number, produces $x^n$ . That is, $(-4)^2 = 16$ , and $(-5)^4 = 625$ . This is because $(-x) \cdot (-x) = x^2$ ; the negatives cancel in pairs.
2. $(-x)^n$ , where n is an odd number, produces $-x^n$ . So $(-2)^3 = -8$ , and $(-3)^3 = -27$ . This is because $(-x) \cdot (-x) = x^2$ , but multiplying that positive product by $-x$ once more gives $-x^3$ , since a positive number times a negative number yields a negative product.

Please note that $(-x)^n$ is not the same as $-x^n$ ! The first denotes the quantity –x raised to some power; the second, which could also be written as $-(x^n)$ , asks for the opposite of the quantity “x to some power.” For example, $(-3)^2 = 9$ , but $-(3^2) = -9$. The placement of parentheses determines the order of operations in such examples. When no parentheses are present, the convention is that the exponentiation is carried out first, so the negative sign applies to the quantity $x^n$ after it has been evaluated.
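Python happens to follow the same convention, with `**` binding more tightly than the unary minus, which makes it a handy way to see the distinction (a quick illustration of my own):

```python
assert (-3)**2 == 9    # the base itself is negative
assert -3**2 == -9     # parsed as -(3**2): the opposite of 3 squared

# Even exponents erase the sign; odd exponents preserve it.
assert (-5)**4 == 625
assert (-2)**3 == -8
```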

There is no special rule to consider, really, when the base is fractional—the results just look a bit different, since we are essentially multiplying fractions. Consider $(\frac{1}{4})^2 = \frac{1}{4} \cdot \frac{1}{4} = \frac{1}{16}$ , or $(- \frac{2}{3})^3 = - \frac{2}{3} \cdot - \frac{2}{3} \cdot - \frac{2}{3} = - \frac{8}{27}$ .

Negative and Fractional Exponents

Now the soup thickens a bit.

1. $x^{-n} = \frac{1}{x^n}$ . In other words, a base raised to a negative power gives the reciprocal of that base raised to the corresponding positive power. So, $2^{-2} = \frac{1}{2^2} = \frac{1}{4}$ .
2. $x^\frac{m}{n} = \sqrt[n]{x^m}$ . The procedure for fractional exponents is perhaps more precarious to describe than to use in practice. Suppose we are asked to evaluate $4^{\frac{1}{2}}$ . Using our rule formula, we would get $\sqrt[2]{4^1}$ , which simplifies to 2, since any number to the first power is that number itself and since we know the square root of 4 to be 2. Note that the 2 outside the radical sign is superfluous, since we understand a radical alone to mean “square root;” we added it just for clarity in correspondence with our formula. For an example in which the numerator of the exponent is not 1, consider $\large 2^\frac{2}{5}$ . Again, following our rule formula, we get $\sqrt[5]{2^2}$ , which, since we know that 2 squared is 4, would simplify to the fifth root of four, $\sqrt[5]{4}$ .

In terms of dealing with rule (2) in practice, the helpful verbal phrase to know is that “x to the m over n power equals the nth root of x to the m power.” A bit unwieldy, yes; but after a bit of practice it begins to come naturally.
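Both rules translate directly into code. In this Python sketch (again, just an illustration), each fractional exponent is checked against the equivalent root:

```python
import math

# Rule 1: a negative exponent gives a reciprocal: 2^-2 = 1 / 2^2.
assert 2**-2 == 1 / 2**2 == 0.25

# Rule 2: x^(m/n) is the nth root of x^m.
assert 4**(1/2) == math.sqrt(4) == 2.0    # 4^(1/2) is the square root of 4
assert math.isclose(2**(2/5), 4**(1/5))   # 2^(2/5) is the fifth root of 4
```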

A Visual Perspective on Exponentiation

In this graph, the dark blue line is the graph of $y = x^2$ . You can see that, right where it exits the picture to the north, $x = 2$ or $-2$ and $y = 4$ . The two other graphs that are also parabolic in shape (U-shaped) are those of $y = x^4$ (red) and $y = x^8$ (green). Notice that, the higher the exponent, the steeper the parabola becomes, for obvious reasons.

There are three other graphs in the picture which appear to rise from near the origin (0,0) and fly off to the right; these are the graphs of $y = x^\frac{1}{2}$ , $y = x^\frac{1}{4}$ , and $y = x^\frac{1}{8}$ . Because $x^\frac{m}{n} = \sqrt[n]{x^m}$ , these graphs are equivalent to root functions—that is, they are the same as $y = \sqrt{x}$ , $y = \sqrt[4]{x}$ , and $y = \sqrt[8]{x}$ . The graphs of the even-numbered root functions are shaped like halves of sideways parabolas; the odd-numbered root functions, which include negative y values, are shaped differently.

Note that every graph passes through the point (1,1). This is because 1 raised to any power remains 1. Neat, huh? Well, at any rate, I think so. ;-)
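That last observation is trivial to confirm numerically: 1 raised to any of the exponents graphed above is still 1 (a one-line Python check, purely illustrative):

```python
# Every curve y = x^n passes through (1, 1), whatever the exponent n.
for n in (2, 4, 8, 1/2, 1/4, 1/8):
    assert 1**n == 1
```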

Logarithms in General, and Their Properties

What is a logarithm? We said earlier that, in exponential operations, there is a base, which is the number being operated upon, and an exponent, which tells us how many copies of the base to multiply. Such an exponential expression is the antilogarithm of a logarithmic function of the same base. We know, for instance, that $2^5 = 32$ . But what if we were asked the question: to what power must we raise the base 2 to get a product of 32?

In mathspeak, this is written: $x = \log_2(32)$ . We know the answer is 5, since $2^5 = 32$ . The logarithm of a number, then, is the power to which we must raise a given base in order to obtain that number. The subscript number gives us the base, and the other quantity is the number of which we are finding the logarithm. So, for example, $\log_{10}(100) = 2$ , since $10^2 = 100$ ; and $\log_3(81) = 4$ , since $3^4 = 81$ .

The equation set which defines this identity of logarithms can be written as:

$x = \log_b(y)$

$b^x = y$

We can see that, just as division can be used to “undo” multiplication for a given factor, so logarithms can “undo” exponential operations upon a given base. Note that it is not possible to find the real logarithm of a negative number, because the logarithm is, in effect, the value of an exponent. One could write $\log_{-3}(9) = 2$ , since $(-3)^2 = 9$ (though negative bases are rarely used); there, it is the base which is negative, not the argument. The quantity $\log_3(-9)$ , on the other hand, is undefined over the real numbers; there is no real power to which 3 can be raised to give a product of -9 (although there is a complex power—let’s not go there right now!).

There are some general properties of logarithms which make them useful tools in complex calculations:

1. $\log_b(y^a) = a \log_b(y)$ . So, then, for $\log_{10}(10^3)$ , we can create the equivalence $3 \log_{10}(10)$ . That is, the base-10 logarithm of 1000 is the same as 3 times the base-10 logarithm of 10. It checks: $10^1 = 10$ , and $10^3 = 1000$ .
2. $\log_b(b^a) = a$ . This seems self-evident enough. For $\log_{10}(100)$ , the answer is, of course, 2, because $10^2 = 100$ .
3. $\log_b(ac) = \log_b(a) + \log_b(c)$ . This is an expansion of a logarithm. Consider $\log_2(32)$ . We know that $32 = 4 \cdot 8$ , so this rule dictates that $\log_2(32) = \log_2(4) + \log_2(8)$ , or $5 = 2 + 3$ .
4. $\log_b(\frac{a}{c}) = \log_b(a) - \log_b(c)$ . Here we have the sister of rule (3). We can rewrite 4 as $\frac{32}{8}$ , and then can see that $\log_2(32) - \log_2(8) = \log_2(4)$ , or $5 - 3 = 2$ .
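All four properties can be verified with Python’s `math.log`, which accepts an optional base argument (the sketch below is my own illustration):

```python
import math

def log2(y):
    return math.log(y, 2)  # base-2 logarithm via math.log's base argument

# Rule 1: log_b(y^a) = a * log_b(y)
assert math.isclose(log2(32**2), 2 * log2(32))
# Rule 2: log_b(b^a) = a
assert math.isclose(log2(2**5), 5)
# Rule 3: the log of a product is the sum of the logs (5 = 2 + 3)
assert math.isclose(log2(4 * 8), log2(4) + log2(8))
# Rule 4: the log of a quotient is the difference of the logs (2 = 5 - 3)
assert math.isclose(log2(32 / 8), log2(32) - log2(8))
```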

Another special rule, the “change of base” rule, is formulated as

$\log_b(a) = \frac{\log_d(a)}{\log_d(b)}$

where d and b represent different bases. For an example,

$\log_2(32) = \frac{\log_{10}(32)}{\log_{10}(2)}$ , or

$5 = \frac{1.50515}{0.30103}$ , approximately. This rule can be used to convert from one base to another, such as when trying to find, say, base-5 logarithms with a calculator that handles only bases e and 10.
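For instance, a calculator (or standard library) that offers only base-10 logarithms can still produce a base-5 logarithm. This Python snippet (illustrative) applies the change-of-base rule to the worked example above and to a base-5 case:

```python
import math

# The worked example: log_2(32) = log_10(32) / log_10(2) = 5.
assert math.isclose(math.log10(32) / math.log10(2), 5)

# A base-5 logarithm from base-10 logs: log_5(125) = 3, since 5^3 = 125.
assert math.isclose(math.log10(125) / math.log10(5), 3)
```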

Some Uses of Logarithms

Logarithms were an important tool in speeding up meticulous multiplications in the days before computers entered the scene, an application most often credited to the designs of John Napier (1550 – 1617). Volumes which consisted of tables of logarithms and antilogarithms were published, so that, to multiply two quantities, the logarithms of each could be looked up and added together. In turn, the antilogarithm of the sum of these logarithms could be looked up, yielding the desired product of the original factors.
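The table-lookup procedure is easy to mimic in Python, with `math.log10` and exponentiation standing in for the printed tables of logarithms and antilogarithms (a sketch of the method, not a historical reconstruction):

```python
import math

# To multiply 123 by 456: add the logarithms of the two factors...
log_sum = math.log10(123) + math.log10(456)

# ...then look up the antilogarithm of the sum to recover the product.
product = 10**log_sum
assert round(product) == 123 * 456
```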

Logarithmic scales are important in numerous applications of science. pH, for example, the measure of chemical acidity or alkalinity, can be defined as:

$\mbox{pH} \approx -\log_{10}{\frac{[\mathrm{H^+}]}{1~\mathrm{mol/L}}}$

That is, the pH of a substance is the negative base-10 logarithm of the concentration of hydrogen ions in that substance, measured in moles per liter. Because pH is plotted along a base-10 logarithmic scale, a substance with pH 8 is 10 times as alkaline as a neutral substance (pH 7), and a substance with pH 5 would be 100 times as acidic as the same neutral substance (the lower the pH, the more acidic the substance; higher pH indicates alkalinity).
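As a concrete check, here is the pH formula in Python (an illustrative helper of my own, not standard chemistry software):

```python
import math

def ph(hydrogen_ion_concentration):
    """pH as the negative base-10 log of [H+] in mol/L."""
    return -math.log10(hydrogen_ion_concentration)

assert math.isclose(ph(1e-7), 7.0)  # neutral water
assert math.isclose(ph(1e-5), 5.0)  # 100x the H+ concentration of neutral
```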

The Richter Scale for measuring seismic activity is also base-10 logarithmic; thus an earthquake which measures 4.0 produces a seismograph wave of an amplitude that is 10 times as great as the wave produced by a 3.0 quake; but the actual seismic energy represented by those graph measurements is logarithmic to approximately base-32, so that the energy released by a 4.0 quake is not ten times, but about 32 times, that created by a 3.0 quake.
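The two scales can be compared numerically. Assuming the usual relation that radiated energy grows as $10^{1.5}$ per magnitude unit (the source of the "approximately base-32" figure), a short Python check:

```python
# Seismograph amplitude grows by a factor of 10 per magnitude unit...
amplitude_ratio = 10**(4.0 - 3.0)
assert amplitude_ratio == 10

# ...but energy grows by roughly 10^1.5 ~ 31.6 per unit, which is the
# "approximately base-32" figure mentioned above.
energy_ratio = 10**(1.5 * (4.0 - 3.0))
assert 31 < energy_ratio < 32
```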

Logarithms are also of interest because they map the set of positive real numbers (under multiplication) to the set of all real numbers (under addition). In so doing, they describe an isomorphic relationship between those two structures. Consider that $\log_b(n) > 0$ if $n > 1$, and that $\log_b(n) < 0$ if $0 < n < 1$ (for any base $b > 1$). That is, while we cannot take the real logarithm of a negative number, the logarithm of a positive number less than 1 is a unique negative number, while the logarithm of any number greater than 1 is a unique positive number. This kind of one-to-one, onto mapping is called a bijection.
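The "multiplication becomes addition" property behind this isomorphism is easy to verify with floating-point arithmetic:

```python
import math

# log(a * b) == log(a) + log(b): multiplication on one side,
# addition on the other -- the homomorphism property.
a, b = 3.7, 42.0
lhs = math.log(a * b)
rhs = math.log(a) + math.log(b)
print(abs(lhs - rhs) < 1e-12)  # True
```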

Notes on Notation and Nomenclature

How logarithmic expressions are notated depends heavily upon context, as there are no universally agreed-upon conventions. Most modern calculators use log to mean ‘base-10 log’ and ln to mean ‘natural log’ (we will discuss the natural logarithm presently), a convention followed by many engineers.

Because our everyday number system is base-10, owing to the number of fingers of the average human, base-10 logarithms are frequently referred to as “common logarithms.” However, some mathematicians take log to mean ‘natural logarithm’ rather than “base-10 logarithm,” and do not use the ln expression at all. Also, computer technicians sometimes take log to mean “base-2 logarithm” since they deal with binary (base-2) numbers so often.

Suffice it to say, it is best to specify the base of a logarithm if there is a chance of misinterpretation. In most modern mathematics textbooks, the engineering conventions are followed, so that $\log(12)$ means the “base-10 logarithm of 12,” while $\ln(12)$ means the “natural logarithm of 12.”

The number e

$e \approx 2.71828182846$

The mathematical constant e, sometimes called Euler’s number, is a transcendental, irrational number (a non-terminating, non-repeating decimal) with remarkable properties. Specifically, it is the base of the natural logarithm, approximated to three decimal places by the number 2.718.

What is so special about this number? Nothing overtly obvious, perhaps, but e is one of the most important numbers in mathematics, right up there with stars like 0, 1, pi, and phi.

In this graph, the thick, black diagonal line is the graph of $y = x + 1$ , a line of slope 1 which passes through the point (0,1).

The three colored lines represent three exponential functions. The red line is the graph of $y = 3^x$ , the green is $y = 2^x$ ; and, in between these values, sits the thicker blue line, which is the graph of $y = e^x$ , or approximately $2.718^x$ .

The constant e is the unique base for which the curve $y = n^x$ has slope exactly 1 at the y-axis; that is, the line $y = x + 1$ (the black line above) is precisely tangent to the curve at (0,1). Both $2^x$ and $3^x$ slightly miss the mark: the slope of $2^x$ at its y-intercept is a bit less than 1, and that of $3^x$ a bit more, but $e^x$, which sits between them, is exactly tangent to the line $y = x + 1$ as it crosses the y-axis.
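This tangency can be checked numerically by estimating the slope of $b^x$ at $x = 0$ with a small central difference; the helper below is our own illustration:

```python
import math

# Estimate the slope of y = b^x at x = 0 with a central difference.
def slope_at_zero(b, h=1e-6):
    return (b ** h - b ** (-h)) / (2 * h)

print(round(slope_at_zero(2), 4))       # 0.6931 -- less than 1
print(round(slope_at_zero(math.e), 4))  # 1.0    -- exactly tangent to y = x + 1
print(round(slope_at_zero(3), 4))       # 1.0986 -- greater than 1
```

(The slope of $b^x$ at the y-intercept is in fact $\ln(b)$, which is why the printed values are the natural logs of 2, e, and 3.)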

Jacob Bernoulli (1654 – 1705) may have been the first to discover some of the more remarkable characteristics of e. He discovered its identity (approximately, of course) by studying a problem on compound interest.

Suppose we were to open an interest-bearing account in the amount of $1.00, which happened to pay the remarkable dividend of 100% interest per year. If the interest were compounded only annually, then the value of the account at the end of the year would be $2.00. If the interest were compounded twice annually, though, we would be credited $1.00 times $1.5^2$, or $2.25, at the end of the year. If it were compounded quarterly, the account would be worth about $2.44 at year’s end; compounding daily, that is, compounding 365 times, would put the value at about $2.71 after 12 months.

Bernoulli noticed that, as the number of compoundings increased, the extra revenue produced by additional compoundings diminished. That is, while 4 compoundings would produce revenues $0.44 greater than a single compounding, a whopping 365 compoundings would add just $0.27 more revenue than what could be earned with only 4 compoundings.

Noting this trend, Bernoulli calculated that e is the value of the account if the interest is compounded an infinite number of times. The more compoundings, the closer the value of an account with a principal of $1.00 at an interest rate of 100% approaches $2.7182818… you get the picture.

The following sequence illustrates this progression, and can be tested on a scientific calculator:

1. $(1 + \frac{1}{10})^{10} = (1.1)^{10} \approx 2.5937$
2. $(1 + \frac{1}{100})^{100} = (1.01)^{100} \approx 2.7048$
3. $(1 + \frac{1}{1000})^{1000} = (1.001)^{1000} \approx 2.717$
4. $(1 + \frac{1}{10000})^{10000} = (1.0001)^{10000} \approx 2.7181$

The larger n becomes, the closer the value of $(1 + \frac{1}{n})^n$ approaches e. Indeed, e is defined as the limit of this expression as n approaches infinity.
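The same progression can be reproduced on a computer rather than a scientific calculator:

```python
import math

# (1 + 1/n)^n creeps toward e as n grows.
for n in (10, 100, 1000, 10000, 1000000):
    print(n, (1 + 1 / n) ** n)

print("e =", math.e)  # 2.718281828459045
```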

Another related and perhaps even more interesting way to define e is as the sum of the infinite series:

$\frac {1}{0!} + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \frac{1}{4!} + \frac{1}{5!} + \frac{1}{6!} \cdots$

! is the factorial symbol. The factorial of a number is the product of all positive integers which are less than or equal to that number. $0! = 1$ because the product of no numbers at all is 1; 1 is the “empty product” in number theory. $2! = 2 \cdot 1 = 2$, $3! = 3 \cdot 2 \cdot 1 = 6$, $4! = 4 \cdot 3 \cdot 2 \cdot 1 = 24$, and so on. The further we expand the series, the closer the sum of the reciprocals approaches e. If we stop at 10!, we get

$\frac {1}{1} + \frac{1}{1} + \frac{1}{2} + \frac{1}{6} + \frac{1}{24} + \frac {1}{120} + \frac {1}{720} + \frac{1}{5040} + \frac{1}{40320} + \frac{1}{362880} + \frac{1}{3628800} \approx 2.71828$

We are already mindbendingly close to e by throwing in the towel at just $\frac{1}{10!}$; we could continue far past $\frac{1}{1000000000000!}$, and each additional term would sharpen the approximation only infinitesimally.
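The factorial series is equally simple to evaluate; stopping at $\frac{1}{10!}$, as in the text:

```python
import math

# e as the sum 1/0! + 1/1! + 1/2! + ..., truncated at 1/10!.
total = sum(1 / math.factorial(k) for k in range(11))

print(total)   # approximately 2.7182818, already very close
print(math.e)  # 2.718281828459045
```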

Such is the magic of e, and why it is one of the most beautiful numbers we know.

Natural logarithms

Natural logarithms are defined as logarithms to the base e, and are most frequently represented by the expression $\ln(x)$. So, then, the equation $x = \ln(1)$ effectively asks us: to what power must we raise e to obtain 1? The answer in this case happens to be 0, since any nonzero number raised to the 0 power is 1.

The following graph shows $y = \log_{10}(x)$ (blue) and $y = \ln(x)$ (red), for comparison.

Note that, on the blue graph, $x = 10$ where $y = 1$ because $\log_{10}(10) = 1$; on the red graph, $x \approx 2.718$ where $y = 1$ because $\ln(e) = 1$. Both graphs pass through the point (1,0) because, since any nonzero number raised to the 0 power is 1, the logarithm of 1 is 0 in every base.

A Basic Example of Logarithms in Algebra

In rudimentary algebra, one immediately useful feature of logarithms is to help find the identity of a variable which is an exponent, or part of an exponent, by “undoing” the exponentiation. Consider the equation $2^{3x} = 10$. How do we find x? Recall the property for logarithms we listed above which states that $\log_b(y^a) = a \log_b(y)$. This means that $\log(2^{3x}) = 3x\log(2)$. We will use base-10 since it is convenient for computers and calculators; we could, in theory, use any base for a calculation such as this, including, of course, e.

The rules of algebra dictate that what is done to one side of the equation must be done to the other; taking the base-10 logarithm of both sides, we get $3x\log(2) = \log(10)$. We then divide through by $\log(2)$, yielding $3x = \frac{\log(10)}{\log(2)}$. Now we can divide through again, this time by three, giving us the exact answer:

$x = \frac{\frac{\log(10)}{\log(2)}}{3}$.

If we use a calculator to approximate the base-10 logarithmic values (log button) and perform the arithmetic, we see that $x \approx 1.107$ . Plugging that value back into the original equation to check our work, we see that $2^{(3 \cdot 1.107)} \approx 10$ .
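The whole calculation takes a line or two in Python, using the standard `math` module in place of a calculator's log button:

```python
import math

# Solve 2^(3x) = 10:  3x * log10(2) = log10(10) = 1
x = math.log10(10) / (3 * math.log10(2))

print(round(x, 3))   # 1.107
print(2 ** (3 * x))  # approximately 10.0 -- checks out
```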

Wikipedia – Exponentiation

Wikipedia – Logarithms

Schizophrenia: An Unpleasant Side Effect of Natural Selection?


Recent studies indicate that schizophrenic conditions may stem from a genetically-triggered maladaptation involving the gene DISC1, which, according to research, has been selected for in evolution even though it contributes to schizophrenia. Compare this with sickle-cell anemia: it is caused by having two mutated copies of a certain gene, while those with just one copy of the mutation are naturally protected against malaria.

One of the key tenets of Darwinism is that heritable traits which work against the survival of a species tend to disappear. So why does schizophrenia linger? Could it be that it confers some advantage?

For years, scientists struggled to identify an adaptive advantage that might explain schizophrenia’s persistence. Researchers from various disciplines volleyed ideas back and forth. Some argued that the genes implicated in the disease promoted creativity; others believed that schizophrenics were frustrated cult leaders—unorthodox thinkers constitutionally “engineered” to lead segments of humanity to break off from the herd, but who lacked the charisma to effect much change. None of the theories gained much traction.

New research is pointing to a different possibility: There may be no adaptive advantage provided by schizophrenia in and of itself, but rather from some genes that contribute to the disease. According to a study published in the Proceedings of the Royal Society, there is evidence that some of the gene variants associated with schizophrenia—especially a mutation in a gene called disrupted-in-schizophrenia 1 (DISC1)—have been selected for by evolution. This supports the idea that the disease may be a maladaptive combination of mutations that individually have the potential to enhance fitness. It could be a more complicated version of the familiar case of sickle cell anemia: having two mutant copies of a certain gene causes the disease, whereas having only one mutant copy provides protection against malaria.

A recent study headed up by Johns Hopkins University neuroscientists may have found what kind of process goes awry in schizophrenic brains. Researchers found that DISC1 regulates the migration of new neurons in the adult brain. When the levels of DISC1 were reduced in mice during adult neurogenesis, the newborn neurons sped up and overshot their intended targets within the hippocampus, says Xin Duan, a study collaborator. When the neurons finally reached their destinations, they forged an unusual number of connections with neighboring cells, a series of events that might give rise to the abnormal—and quite crippling—brain functions associated with schizophrenia, according to Hongjun Song, a Johns Hopkins neurologist who also worked on the study. It is possible, Song says, that further research will lead to a drug that treats schizophrenia by restoring normal neurogenesis.

So what evolutionary advantage could schizophrenia-related genes bring to people who have some of the genes but not the disease? For now, this remains one of the many open questions about this puzzling condition.

The Chutzpah of Intelligent Design

Posted in evolution, faith, intelligent design, Propaganda, Religion, Science by Curtis on 11/22/07


From the lively Jewcy comes mathematics professor Jason Rosenhouse’s response to an exchange between writer Neal Pollak and Discovery Institute senior fellow David Klinghoffer:

I do not know what you do for a living, but I suspect you are pretty good at it. You probably trained for years to learn the basic elements of your craft, and then honed those skills through more years of on-the-job experience. Now imagine that someone without that training and experience presumes to discourse on your profession. Worse, they make assertions and arguments that are obvious nonsense to anyone versed in the subject. Not an altogether uncommon experience for you, I suspect, but one that is no less annoying for that. . .

. . .

Creationists of all stripes, be they the old-school Bible thumpers or the slightly more sophisticated ID proponents, do very well in public debates and scripted presentations. Any venue, in fact, in which flash and performance art are the main features. But place them in an environment where evidence and logic reign, such as a scientific conference or a courtroom trial, and suddenly they are far less impressive. Why do you suppose that is?

Let us be blunt. The specific scientific claims of ID proponents have been decisively refuted over and over again. Their sleazy use of rhetoric and propaganda has shown they have little interest in open and honest debate. They take quotations out of context, distort evidence, misrepresent whole scientific disciplines, oversimplify difficult ideas, and impugn the integrity of scientists. All the while they claim God’s blessing for their project and invoke conspiracy theories against those who disagree. And when they are done with all that, then they turn around and accuse scientists of being arrogant.

Where I come from we call that chutzpah.

Yes, that certainly just about sums it up.

55 Cancri – A Home Away from Home?

Posted in astronomy, extrasolar planets, Science, SETI, space by Curtis on 11/12/07


Skymania News reports that astronomers working at California’s Lick Observatory have identified and characterized an Earth-like planet orbiting 55 Cancri, a star 41 light-years distant from our own Sun and remarkably similar to it in physical characteristics such as core composition, spectrum, and temperature.

The discovery of this planet, approximately 45 times the mass of Earth and located within its star’s “habitable zone”—the orbital stratum in which conditions for the formation of Earth-like life would be optimal—demonstrates concretely what astronomers and philosophers have speculated for centuries: that our own star system, while quite special to us, is far from categorically unique.

While it would certainly be “jumping the gun” to assume that such a planet harbors life simply because of the existence of an optimal configuration, the most profound implication of the discovery—in harmony with other discoveries about extrasolar worlds which continue to surface as technology and techniques improve—is that, in its ability to support life, our own world is hardly the beneficiary of a singular providence of chance or “design.”

It is the fifth planet to be identified in orbit around the star 55 Cancri, a star very similar in type and age to our own Sun, making it a virtual twin of our own solar system.

The star, which is dimly visible to the naked eye in the constellation of Cancer, now holds the record for the number of worlds in orbit, after our own Sun. It lies just 41 light-years away – right on our cosmic doorstep.

Scientists said the new planet is 45 times the mass, or size of the Earth, and has a year 260 days long – the time it takes to orbit 55 Cancri. It was found by measuring the tiny wobble it causes to the star as it orbits. Detecting this was a triumph for the astronomers and took them 18 years of study from Lick Observatory, California, because it had to be separated from the effects of the other planets.

The planet is 72.5 million miles from 55 Cancri, a little less than the distance of the Earth from the Sun, but at an ideal distance for the warmth that life as we know it would need to exist.

Geoff Marcy, of the University of California, said last night: “The discovery has me jumping out of my socks. We now know that our own Sun and its family of planets is not unusual.”

He said that if there is a moon going around this new planet, it would have a rocky surface. Water could form lakes or seas and produce the conditions for life to begin. But he added: “Then all bets are off as to how life could evolve on that moon.”

Fellow discoverer Debra Fischer, of San Francisco State University, said she expected that other Earth-like planets could exist in the star’s habitable zone.

She said: “I bet that gap is not empty.”

She added: “55 Cancri is very much like our own sun. It is about the same size and the same age. It is a solar system that is packed with planets. It has profound implications for how we search for Earth-like planets.”

She went on: “The gas-giant planets in our solar system all have large moons. If there is a moon orbiting this new, massive planet, it might have pools of liquid water on a rocky surface.”

Austin, TX to Require Zero-Energy Homes by 2015


Jetson Green writes on an Austin, Texas city initiative that will ramp up energy efficiency standards through 2015. The big surprise? It may actually save money in the long run:

The City of Austin, after a year of serious research by the Zero Energy Capable Homes Task Force, announced a huge initiative towards requiring all new single-family homes to be zero-energy capable by 2015. Here’s how it works. Today, the city adopted the first in a series of code amendments and a road map of code amendments that will be implemented through 2015. Due to this first series of changes, roughly 6500 new homes built in Austin will be about 20% more efficient. Through 2015, as the code changes ratchet up the efficiency baseline, homes will end up using about 65% less energy than those built today. Then, owners will have the option of adding solar or some other clean tech to get the home to zero energy status.

Speaking of the Zero Energy Homes Initiative, Mayor Will Wynn said, “We’re taking action today that will lower the cost of utility bills, make housing more affordable, help improve air quality and take critical steps in the fight against global warming.”

I’m always a bit startled by the phrase “fight against global warming.” I suppose it is more politically neutral than the “fight against industrial excess.”

British University: Oceans Soaking Up Less CO2

Posted in climate change, ecology, Environment, Global Warming, oceans, Science, UK news by Curtis on 10/21/07


From the BBC News:

The amount of carbon dioxide being absorbed by the world’s oceans has reduced, scientists have said.

University of East Anglia researchers gauged CO2 absorption through more than 90,000 measurements from merchant ships equipped with automatic instruments.

Results of their 10-year study in the North Atlantic show CO2 uptake halved between the mid-90s and 2000 to 2005.

Scientists believe global warming might get worse if the oceans soak up less of the greenhouse gas.

Researchers said the findings, published in a paper for the Journal of Geophysical Research, were surprising and worrying because there were grounds for believing that, in time, the ocean might become saturated with our emissions.

The world’s oceans, like the terrestrial biomes taken as a whole, provide an important carbon ‘sink’ through which atmospheric carbon dioxide levels are regulated. Algal blooms that feed on carbon dioxide are one of the main mechanisms through which the ocean participates in the carbon cycle, but, as far as we know, there is only so much that they can handle before saturation begins to occur.

Mounting evidence has suggested to many scientists that the ocean’s regulation of CO2 is a finely-tuned process capable of maintaining an equilibrium in all but the most extreme circumstances. The very real concern of these scientists is that, after over a century of virtually unfettered human industrial emissions, such an extreme circumstance may be here or presently on its way.