can’t see the forest

Exponents, Logarithms, and the Number e

Posted in algebra, Education, mathematics, numbers, Science by Curtis on 11/25/07


Mathematics was never one of my academic strengths. Blog admin isn’t a headline on my résumé, either. In college, though, I’ve found myself to be far less averse to the numerical arts than in high school, a change of attitude I chalk up to better teachers and to my own growing curiosity in fields such as cosmology, linguistics, analytic philosophy, and the like. I’m still in no danger of becoming a mathematician, I assure you; consequently, the part of the post wherein I actually know what I’m talking about probably ends with this period.

This term, the prof took us on a scenic, leisurely tour of linear and quadratic equations. When we arrived in the country of exponential and logarithmic functions, however, he quickened his pace severely and proceeded to administer the most grueling, extensive exam of the term thus far. Naturally, necessarily, I was moved to investigate the material on my own. I am sharing some of my findings in hopes that they may prove interesting and useful to other beginning and intermediate students of mathematics. This is not to mention providing a platform for the more algebraically inclined to prove what a dunce I am. Be kind, I prithee, forsooth.

Contents

  1. Exponents and Rules for Exponentiation
    1. Negative and Fractional Bases
    2. Negative and Fractional Exponents
    3. A Visual Perspective on Exponentiation
  2. Logarithms in General, and Their Properties
    1. Some Uses of Logarithms
    2. Notes on Notation and Nomenclature
  3. The Number e
    1. Natural Logarithms
  4. A Basic Example of Logarithms in Algebra

Exponents and Rules for Exponentiation

Just as multiplication is a shortcut for repetitive addition, so exponentiation is a shortcut for repetitive multiplication. We know that 2 \cdot 5 = 2 + 2 + 2 + 2 + 2 = 10 ; similarly, 2^5 = 2 \cdot 2 \cdot 2 \cdot 2 \cdot 2 = 32 . We read the multiplication as “2 times 5,” and each of these two numbers we call the factors of the resulting product, which is 10. The exponentiation we might read as “2 to the fifth power,” or simply “2 to the fifth,” where 2 is the base, the number being operated on, and 5 is the exponent. An exponent essentially tells us how many copies of the base to multiply to complete the operation. So, 3^2 = 3 \cdot 3 = 9 , and 3^4 = 3 \cdot 3 \cdot 3 \cdot 3 = 81 .

There are certain rules for exponentiation, or properties of exponents, if you will, which give them their usefulness as a shortcut for working with repeated multiplication of large numbers. Dealing with expressions and equations involving exponents is made much easier if one masters these few concepts from the outset (a quick numerical check follows the list):

  1. a^m \cdot a^n = a^{(m+n)} . That is, if we multiply a base to one power times the same base to another power, the product is equal to our base to the power of the sum of the two exponents. For example, 2^2 \cdot 2^3 = 2^5 , which is 32. Conversely, we know that we could rewrite 2^5 as 2^1 \cdot 2^4 , in addition to 2^2 \cdot 2^3 , since the sum of the exponents is 5 in both cases.
  2. a^m / a^n = a^{(m-n)} . Similarly, dividing a base to one power by the same base to another power gives a quotient equal to the given base to the power of the difference of the two exponents. 2^5 / 2^2 = 2^3 , or 32 / 4 = 8 .
  3. (a^m)^n = a^{(mn)} . If we raise a base to a power and then, in turn, raise that product to another power, the end product is equal to our base to the power of the product of the two exponents. For example, 2^6 = {(2^2)}^3 , or 64 = 4^3 .
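These rules are easy to check with concrete numbers. Here is a minimal Python sketch (just an illustration, not anything a course would require) that verifies each rule for small integer values:

    # Check the three exponent rules with small concrete numbers.
    a, m, n = 2, 2, 3

    # Rule 1: a^m * a^n == a^(m + n)
    assert a**m * a**n == a**(m + n)        # 4 * 8 == 32

    # Rule 2: a^m / a^n == a^(m - n)
    assert a**5 / a**2 == a**(5 - 2)        # 32 / 4 == 8

    # Rule 3: (a^m)^n == a^(m * n)
    assert (a**m)**n == a**(m * n)          # 4 cubed == 64

    print("All three exponent rules check out.")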

Special cases and notes on terminology with which to be familiar include:

  1. Exponents 1 and 0. Any base raised to the first power is that number itself, so 2^1 = 2 , and 1449^1 = 1449 . Any base raised to the 0 power (even base 0, by most interpretations) equals 1, so 2^0 = 1 , and 1449^0 = 1 as well. This is because, according to our rule (1) above, we know, for example, that 3^3 = 3^2 \cdot 3^1 , or 27 = 9 \cdot 3 . Following the same principle, we see that 3^1 = 3^1 \cdot 3^0 , since the sum of the exponents 1 and 0 is 1 and since 3^1 = 3 ; the quantity 3^0 , then, must equal 1, and indeed any number to the 0 power must be 1 for the same reason.
  2. Exponents 2 and 3. The common terminology for x^2 is “x squared,” and x^3 is read as “x cubed.” This is because of geometry; if we take the length of the side of a square as our base and raise it to the second power, the result is the area of the square in square units. Similarly, the length of the side of a cube raised to the third power gives the volume of the cube in cubic units.

Negative and Fractional Bases

What happens if our base is a negative number, assuming we raise it to a positive, whole exponent?

  1. (-x)^n , where n is an even number, produces x^n . That is, (-4)^2 = 16 , and (-5)^4 = 625 . This is because (-x) \cdot (-x) = x^2 ; the negatives cancel in pairs.
  2. (-x)^n , where n is an odd number, produces -x^n . So (-2)^3 = -8 , and (-3)^3 = -27 . This is because (-x) \cdot (-x) = x^2 , but that positive product times (-x) once more gives -x^3 , since a positive number times a negative number yields a negative product.

Please note that (-x)^n is not the same as -x^n ! The first denotes the quantity -x raised to some power; the second, which could also be written as -(x^n) , asks for the opposite of the quantity “x to some power.” For example, (-3)^2 = 9 , but -(3^2) = -9. The placement of parentheses determines the order of operations in such examples. When no parentheses are present, the exponent applies to x alone, and the negative sign is applied only after the power has been evaluated.
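Incidentally, Python follows the same order-of-operations convention, so the distinction can be checked directly in an interpreter (a tiny sketch for illustration):

    # Exponentiation binds more tightly than the unary minus sign,
    # exactly as in the written convention above.
    print((-3)**2)    # 9  : the base itself is negative
    print(-3**2)      # -9 : read as -(3**2); the minus applies after squaring
    print((-2)**3)    # -8 : an odd power of a negative base stays negative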

There is no special rule to consider, really, when the base is fractional—the results just look a bit different, since we are essentially multiplying fractions. Consider (\frac{1}{4})^2 = \frac{1}{4} \cdot \frac{1}{4} = \frac{1}{16} , or (- \frac{2}{3})^3 = - \frac{2}{3} \cdot - \frac{2}{3} \cdot - \frac{2}{3} = - \frac{8}{27} .

Negative and Fractional Exponents

Now the soup thickens a bit.

  1. x^{-n} = \frac{1}{x^n} . In other words, a base raised to a negative power equals the reciprocal of that base raised to the corresponding positive power, that is, 1 over x^n . So, 2^{-2} = \frac{1}{2^2} = \frac{1}{4} .
  2. x^\frac{m}{n} = \sqrt[n]{x^m} . The procedure for fractional exponents is perhaps more precarious to describe than to use in practice. Suppose we are asked to evaluate 4^{\frac{1}{2}} . Using our rule formula, we would get \sqrt[2]{4^1} , which simplifies to 2, since any number to the first power is that number itself and since we know the square root of 4 to be 2. Note that the 2 outside the radical sign is superfluous, since we understand a radical alone to mean “square root;” we added it just for clarity in correspondence with our formula. For an example in which the numerator of the exponent is not 1, consider \large 2^\frac{2}{5} . Again, following our rule formula, we get \sqrt[5]{2^2} , which, since we know that 2 squared is 4, would simplify to the fifth root of four, \sqrt[5]{4} .

In terms of dealing with rule (2) in practice, the helpful verbal phrase to know is that “x to the m over n power equals the nth root of x to the m power.” A bit unwieldy, yes; but after a bit of practice it begins to come naturally.
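Both rules can also be tried on a calculator or in a couple of lines of Python; this sketch simply restates the examples above:

    # Negative exponent: 2^(-2) is the reciprocal of 2^2.
    print(2**-2)        # 0.25, i.e. 1/4

    # Fractional exponent: 4^(1/2) is the square root of 4.
    print(4**0.5)       # 2.0

    # 2^(2/5) is the fifth root of 2 squared, i.e. the fifth root of 4.
    print(2**(2/5))     # about 1.3195
    print(4**(1/5))     # the same value, computed directly as a fifth root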

A Visual Perspective on Exponentiation

[Graph: the power functions y = x^2, y = x^4, and y = x^8, together with the root functions y = x^(1/2), y = x^(1/4), and y = x^(1/8)]

In this graph, the dark blue line is the graph of y = x^2 . You can see that, right where it exits the picture to the north, x = 2 or -2 and y = 4 . The two other curves that are also parabolic in shape (U-shaped) are the graphs of y = x^4 (red) and y = x^8 (green). Notice that the higher the exponent, the more steeply the curve rises once x moves past 1 (and past -1 on the left).

There are three other graphs in the picture which appear to rise from near the origin (0,0) and fly off to the right; these are the graphs of y = x^\frac{1}{2} , y = x^\frac{1}{4} , and y = x^\frac{1}{8} . Because x^\frac{m}{n} = \sqrt[n]{x^m} , these graphs are equivalent to root functions—that is, they are the same as y = \sqrt{x} , y = \sqrt[4]{x} , and y = \sqrt[8]{x} . The graphs of the even-numbered root functions are shaped like halves of sideways parabolas; the odd-numbered root functions, which include negative y values, are shaped differently.

Note that every graph passes through the point (1,1). This is because 1 raised to any power remains 1. Neat, huh? Well, at any rate, I think so. ;-)
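If you would like to reproduce a picture along these lines yourself, a minimal matplotlib sketch (assuming numpy and matplotlib are installed; the axis limits are arbitrary choices) might look like this:

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(0, 2, 400)          # stick to x >= 0 so the root functions are defined
    for n in (2, 4, 8):
        plt.plot(x, x**n, label=f"y = x^{n}")          # the ever-steeper U-shaped curves
        plt.plot(x, x**(1/n), label=f"y = x^(1/{n})")  # the corresponding root functions

    plt.ylim(0, 4)
    plt.axvline(1, color="gray", lw=0.5)  # every curve passes through (1, 1)
    plt.axhline(1, color="gray", lw=0.5)
    plt.legend()
    plt.show()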


Logarithms in General, and Their Properties

What is a logarithm? We said earlier that, in exponential operations, there is a base, which is the number being operated upon, and an exponent, which tells us how many copies of the base to multiply. Such an exponential expression is the antilogarithm of a logarithmic function of the same base. We know, for instance, that 2^5 = 32 . But what if we were asked the question: to what power must we raise the base 2 to get a product of 32?

In mathspeak, this is written: x = \log_2(32) . We know the answer is 5, since 2^5 = 32 . The logarithm of a number, then, is the power to which we must raise a given base in order to obtain that number. The subscript number gives us the base, and the other quantity is the number of which we are finding the logarithm. So, for example, \log_{10}(100) = 2 , since 10^2 = 100 ; and \log_3(81) = 4 , since 3^4 = 81 .

The equation set which defines this identity of logarithms can be written as:

x = \log_b(y)

b^x = y

We can see that, just as division can be used to “undo” multiplication for a given factor, so logarithms can “undo” exponential operations upon a given base. Note that it is not possible to find the real logarithm of a negative number, because the logarithm is, in effect, the value of an exponent. One might write \log_{-3}(9) = 2 because (-3)^2 = 9 , but there it is the base which is negative, not the 9. The quantity \log_3(-9) , by contrast, is undefined in the real numbers; there is no real power to which 3 can be raised to give a result of -9 (although there is a complex power; let’s not go there right now!).
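The same relationship is easy to explore with Python's standard math module, where math.log(y, b) returns the base-b logarithm of y:

    import math

    print(math.log(32, 2))      # 5.0, because 2^5 = 32
    print(math.log(100, 10))    # 2.0, because 10^2 = 100
    print(math.log(81, 3))      # 4.0 (up to floating-point rounding), because 3^4 = 81

    # Asking for the logarithm of a negative number raises an error, since
    # no real power of a positive base can ever be negative:
    # math.log(-9, 3)  ->  ValueError: math domain error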

There are some general properties of logarithms which make them useful tools in complex calculations:

  1. \log_b(y^a) = a \log_b(y) . So, then, for \log_{10}(10^3) , we can create the equivalence 3 \log_{10}(10) . That is, the base-10 logarithm of 1000 is the same as 3 times the base-10 logarithm of 10. It checks: \log_{10}(10) = 1 , so 3 \log_{10}(10) = 3 , and indeed 10^3 = 1000 . (A numerical check of these properties follows the list.)
  2. \log_b(b^a) = a . This seems self-evident enough. For \log_{10}(100) , the answer is, of course, 2, because 10^2 = 100 .
  3. \log_b(ac) = \log_b(a) + \log_b(c) . This is an expansion of a logarithm. Consider \log_2(32) . We know that 32 = 4 \cdot 8 , so this rule dictates that \log_2(32) = \log_2(4) + \log_2(8) , or 5 = 2 + 3 .
  4. \log_b(\frac{a}{c}) = \log_b(a) - \log_b(c) . Here we have the sister of rule (3). We can rewrite 4 as \frac{32}{8} , and then can see that \log_2(32) - \log_2(8) = \log_2(4) , or 5 - 3 = 2 .
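As with the exponent rules, these properties are easy to sanity-check numerically; a short Python sketch:

    import math

    def log2(x):
        return math.log(x, 2)       # base-2 logarithm

    # Property 1: log_b(y^a) = a * log_b(y)
    assert math.isclose(log2(32**3), 3 * log2(32))

    # Property 3: log_b(a * c) = log_b(a) + log_b(c)
    assert math.isclose(log2(4 * 8), log2(4) + log2(8))    # 5 = 2 + 3

    # Property 4: log_b(a / c) = log_b(a) - log_b(c)
    assert math.isclose(log2(32 / 8), log2(32) - log2(8))  # 2 = 5 - 3

    print("Logarithm properties verified.")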

Another special rule, the “change of base” rule, is formulated as

\log_b(a) = \frac{\log_d(a)}{\log_d(b)}

where d and b represent different bases. For an example,

\log_2(32) = \frac{\log_{10}(32)}{\log_{10}(2)} , or

5 = \frac{1.50515}{0.30103} , approximately. This rule can be used to convert from one base to another, such as when trying to find, say, base-5 logarithms with a calculator that handles only bases e and 10.
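The same conversion is easy to carry out in code using only base-10 logarithms; a short Python sketch:

    import math

    # Base-5 logarithm of 125 via the change-of-base rule, using base-10 logs.
    print(math.log10(125) / math.log10(5))   # 3.0, because 5^3 = 125

    # The worked example above: log_2(32) = log_10(32) / log_10(2).
    print(math.log10(32))                    # about 1.50515
    print(math.log10(2))                     # about 0.30103
    print(math.log10(32) / math.log10(2))    # 5.0 (up to rounding)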

Some Uses of Logarithms


Logarithms were an important tool in speeding up meticulous multiplications in the days before computers entered the scene, an application most often credited to the designs of John Napier (1550 – 1617). Volumes which consisted of tables of logarithms and antilogarithms were published, so that, to multiply two quantities, the logarithms of each could be looked up and added together. In turn, the antilogarithm of the sum of these logarithms could be looked up, yielding the desired product of the original factors.

Logarithmic scales are important in numerous applications of science. pH, for example, the measure of chemical acidity or alkalinity, can be defined as:

\mbox{pH} \approx -\log_{10}{\frac{[\mathrm{H^+}]}{1~\mathrm{mol/L}}}

That is, the pH of a substance is the negative base-10 logarithm of the concentration of hydrogen ions in that substance, measured in moles per liter. Because pH is plotted along a base-10 logarithmic scale, a substance with pH 8 has one-tenth the hydrogen-ion concentration of a neutral substance (pH 7), making it more alkaline, and a substance with pH 5 has 100 times the hydrogen-ion concentration of that same neutral substance, making it more acidic (the lower the pH, the more acidic the substance; higher pH indicates alkalinity).
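As a quick illustration (the concentrations below are made-up round numbers, chosen only to make the arithmetic obvious), the pH formula can be evaluated directly:

    import math

    def pH(hydrogen_ion_molarity):
        """Negative base-10 logarithm of the H+ concentration in mol/L."""
        return -math.log10(hydrogen_ion_molarity)

    print(pH(1e-7))   # 7.0 : neutral
    print(pH(1e-5))   # 5.0 : 100 times the H+ concentration of neutral, i.e. more acidic
    print(pH(1e-8))   # 8.0 : one-tenth the H+ concentration of neutral, i.e. more alkaline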

The Richter Scale for measuring seismic activity is also base-10 logarithmic; thus an earthquake which measures 4.0 produces a seismograph wave of an amplitude that is 10 times as great as the wave produced by a 3.0 quake; but the actual seismic energy represented by those graph measurements is logarithmic to approximately base-32, so that the energy released by a 4.0 quake is not ten times, but about 32 times, that created by a 3.0 quake.

Logarithms are also of interest because they are capable of mapping the set of positive real numbers (under multiplication) to the set of all real numbers (under addition). In so doing, they describe an isomorphic relationship. Consider that, for a base b > 1, \log_b(n) > 0 if n > 1, and \log_b(n) < 0 if 0 < n < 1. That is, while we cannot take the real logarithm of a negative number, the logarithm of a positive number less than 1 will be a unique negative number, while the logarithms of all numbers greater than 1 are unique positive numbers. This kind of 1:1 mapping is called a bijection.

Notes on Notation and Nomenclature

How logarithmic expressions are notated depends heavily upon context, as there are no universally agreed-upon conventions. Most modern calculators use log to mean ‘base-10 log’ and ln to mean ‘natural log’ (we will discuss the natural logarithm presently), a convention followed by many engineers.

Because our system of mathematics is normally executed in base-10, owing to the number of fingers of the average human, base-10 logarithms are frequently referred to as “common logarithms.” However, some mathematicians take log to mean ‘natural logarithm’ rather than “base-10 logarithm,” and do not use the ln expression at all. Also, computer technicians sometimes take log to mean “base-2 logarithm” since they deal with binary (base-2) numbers so often.

Suffice it to say, it is best to specify the base of a logarithm if there is a chance of misinterpretation. In most modern mathematics textbooks, the engineering conventions are followed, so that \log(12) means the “base-10 logarithm of 12,” while \ln(12) means the “natural logarithm of 12.”


The Number e

e \approx 2.71828182846

The mathematical constant e, sometimes called Euler’s number, is a transcendental, irrational number (a non-terminating, non-repeating decimal) with remarkable properties. Specifically, it is the base of the natural logarithm, approximated to three decimal places by the number 2.718.

What is so special about this number? Nothing overtly obvious, perhaps, but e is one of the most important numbers in mathematics, right up there with stars like 0, 1, pi, and phi.

[Graph: y = 2^x (green), y = e^x (blue), and y = 3^x (red), together with the line y = x + 1]

In this graph, the thick, black diagonal line is the graph of y = x + 1 , a line of slope 1 which passes through the point (0,1).

The three colored lines represent three exponential functions. The red line is the graph of y = 3^x , the green is y = 2^x ; and between these two sits the thicker blue line, which is the graph of y = e^x , or approximately 2.718^x .

The number e is the only base a for which the tangent line to the curve y = a^x at its y-intercept (0,1) has a slope of exactly 1; in other words, y = e^x is the one exponential curve that is precisely tangent to the black line y = x + 1 where it crosses the y axis. Both 2^x and 3^x slightly miss the mark, but e^x , which is somewhere between them, touches the line exactly.
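One way to see this numerically is to estimate the slope of a^x at x = 0 for a few bases: for base e the slope comes out to 1, matching the line y = x + 1, while 2 and 3 land on either side. A rough Python sketch using a simple finite-difference approximation:

    import math

    def slope_at_zero(a, h=1e-6):
        """Approximate the slope of a**x at x = 0 with a symmetric difference."""
        return (a**h - a**(-h)) / (2 * h)

    for a in (2, math.e, 3):
        print(a, slope_at_zero(a))
    # base 2       -> about 0.693 (too shallow; misses the line y = x + 1)
    # base e       -> about 1.000 (exactly tangent to y = x + 1 at (0, 1))
    # base 3       -> about 1.099 (too steep)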

Jacob Bernoulli (1654 – 1705) may have been the first to discover some of the more remarkable characteristics of e. He discovered its identity (approximately, of course) by studying a problem on compound interest.

Suppose we were to open an interest-bearing account in the amount of $1.00, which happened to pay the remarkable dividend of 100% interest per year. If the interest were compounded only annually, then the value of the account at the end of the year would be $2.00. If the interest were compounded twice annually, though, we would be credited $1.00 times 1.5^2 , or $2.25, at the end of the year. If it were compounded quarterly, it would be worth about $2.44 at year’s end; compounding daily—that is, compounding 365 times—would put the value at $2.71 after 12 months.

Bernoulli noticed that, as the number of compoundings increased, the extra revenue produced by additional compoundings diminished. That is, while 4 compoundings would produce revenues $0.44 greater than a single compounding, a whopping 365 compoundings would add just $0.27 more revenue than what could be earned with only 4 compoundings.

Noting this trend, Bernoulli reasoned that e is the value the account approaches as the number of compoundings grows without bound. The more compoundings, the closer the value of an account with a principal of $1.00 at an interest rate of 100% comes to approximately $2.7182818. . .you get the picture.

The following sequence illustrates this progression, and can be tested on a scientific calculator:

  1. (1 + \frac{1}{10})^{10} = (1.1)^{10} \approx 2.5937
  2. (1 + \frac{1}{100})^{100} = (1.01)^{100} \approx 2.7048
  3. (1 + \frac{1}{1000})^{1000} = (1.001)^{1000} \approx 2.717
  4. (1 + \frac{1}{10000})^{10000} = (1.0001)^{10000} \approx 2.7181

The larger n becomes, the closer the value of (1 + \frac{1}{n})^n approaches e; the number e is the limit of this expression as n approaches infinity.
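The whole progression, from the compound-interest story to the limit, fits in a few lines of Python (a sketch, with the account value rounded for readability):

    # Year-end value of a $1.00 account at 100% annual interest,
    # compounded n times during the year.
    def year_end_value(n):
        return (1 + 1/n) ** n

    for n in (1, 2, 4, 365, 10_000, 1_000_000):
        print(n, round(year_end_value(n), 7))
    # 1         -> 2.0
    # 2         -> 2.25
    # 4         -> about 2.4414063
    # 365       -> about 2.7145675
    # 10000     -> about 2.7181459
    # 1000000   -> about 2.7182805, creeping toward e = 2.7182818...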

Another related and perhaps even more interesting way to define e is as the sum of the infinite series:

\frac {1}{0!} + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \frac{1}{4!} + \frac{1}{5!} + \frac{1}{6!} \cdots

! is the factorial symbol. The factorial of a number is the product of all positive integers which are less than or equal to that number. 0! = 1 because the product of no numbers at all (the “empty product” of number theory) is defined to be 1. 2! = 2 \cdot 1 = 2 , 3! = 3 \cdot 2 \cdot 1 = 6 , and 4! = 4 \cdot 3 \cdot 2 \cdot 1 = 24 , etc. The further we expand the series, the closer the sum of the reciprocals approaches e. If we stop at 10!, we get

\frac {1}{1} + \frac{1}{1} + \frac{1}{2} + \frac{1}{6} + \frac{1}{24} + \frac {1}{120} + \frac {1}{720} + \frac{1}{5040} + \frac{1}{40320} + \frac{1}{362880} + \frac{1}{3628800} \approx 2.71828

We are already mindbendingly close to e by tossing in the towel at just \frac{1}{10!}, but we could continue far past \frac{1}{1000000000000!} , and thereby only slightly more accurately approximate its value.
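The speed of this convergence is easy to watch in Python, using math.factorial for the denominators:

    import math

    total = 0.0
    for k in range(11):                 # the terms 1/0! through 1/10!
        total += 1 / math.factorial(k)
        print(k, total)

    print(math.e)                       # 2.718281828459045, for comparison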

Such is the magic of e, and why it is one of the most beautiful numbers we know.

Natural Logarithms

Natural logarithms are defined as logarithms to the base e, and are most frequently represented by the expression \ln(x) . So, then, the equation x = \ln(1) effectively asks us: to what power must we raise e to obtain the product 1? The answer in this case happens to be 0, since any number to the 0 power is 1.

The following graph shows y = \log_{10}(x) (blue) and y = \ln(x) (red), for comparison.

[Graph: y = \log_{10}(x) in blue and y = \ln(x) in red]

Note that, on the blue graph, x = 10 where y = 1 because \log_{10}(10) = 1 ; on the red graph, x \approx 2.718 where y = 1 because \ln(e) = 1. Both graphs pass through the point (1,0) because, since any number to the 0 power is 1, the logarithm of 1 is 0 in every base.
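In Python, math.log with a single argument is the natural logarithm and math.log10 is the common logarithm, which makes the points just described easy to confirm:

    import math

    print(math.log(1))          # 0.0 : ln(1) = 0, since e^0 = 1
    print(math.log(math.e))     # 1.0 : ln(e) = 1
    print(math.log10(10))       # 1.0 : log_10(10) = 1
    print(math.log10(1))        # 0.0 : the logarithm of 1 is 0 in every base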


A Basic Example of Logarithms in Algebra

In rudimentary algebra, one immediately useful feature of logarithms is to help find the value of a variable which is an exponent or part of an exponent by “undoing” the exponentiation. Consider the equation 2^{3x} = 10 . How do we find x? Recall the property for logarithms we listed above which states that \log_b(y^a) = a \log_b(y) . This means that \log(2^{3x}) = 3x\log(2) . We will use base-10 since it is convenient for computers and calculators; we could, in theory, use any base for a calculation such as this, including, of course, e.

The rules of algebra dictate that what is done to one side of the equation must be done to the other; observing this, we get 3x\log(2) = \log(10) . We then divide through by \log(2) , yielding 3x = \frac{\log(10)}{\log(2)}. Now we can divide through again, this time by three, giving us the exact answer:

x = \frac{\frac{\log(10)}{\log(2)}}{3}.

If we use a calculator to approximate the base-10 logarithmic values (log button) and perform the arithmetic, we see that x \approx 1.107 . Plugging that value back into the original equation to check our work, we see that 2^{(3 \cdot 1.107)} \approx 10 .
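The whole calculation, check included, takes only a few lines of Python:

    import math

    # Solve 2^(3x) = 10 by taking the base-10 logarithm of both sides.
    x = math.log10(10) / (3 * math.log10(2))
    print(x)                    # about 1.1073094

    # Check: plug x back into the original equation.
    print(2 ** (3 * x))         # 10.0 (up to floating-point rounding)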


Further reading

Wikipedia – Exponentiation

Wikipedia – Logarithms

Wikipedia – The Number E

A Collection of Fractal Flythroughs


I love me some recursion, folks. Hopefully you’ll enjoy these fractal animations I’ve gleaned and compiled from YouTube. The music is, in my humble opinion, much more forgettable than the video in some instances—but your ears are not mine, so perhaps you’ll hear something I don’t. As for the graphics, though: get out your Mandelbrot Brand spectacles. Enjoy the turbulence while it lasts.

Far more than mere shimmering pretties, fractals are geometric patterns which have fine structure at arbitrarily small scales, such that the structure is at least approximately self-similar to the shape of the whole. Some examples of naturally occurring fractals include snowflakes, lightning bolts, tree branches, fern leaves [ed. okay, fern fronds]—in fact, fractals seem to be integral to the geometric expression of natural forms in any direction you happen to be looking (the link is to the Wikipedia article with excellent illustrations and explanations). The principle of recursion is fundamental to number theory and has been gaining the attention of mathematicians and cosmologists at least since the days of Leibniz, and increasingly so since the exploits of Benoit Mandelbrot and in this age of all things electro-graphical.

Following Rules is for Squares


Over ten years ago, Mitchel Resnick and Brian Silverman of the Massachusetts Institute of Technology came up with an interesting software demonstration of the phenomenon of emergence. Emergence is loosely defined as the appearance of complex architecture or behavior that follows from simple rules, and is a cornerstone of most conceptions of biological evolution. Evolutionists believe that the preponderance of evidence suggests the sufficiency of emergence as a driver for adaptation, that no appeal to a ‘watchmaker’ of super-humanesque intelligence is necessary.

In their demonstration, Resnick and Silverman work with a black, two-dimensional plane composed of small squares which can ‘turn on’ (turn white) according to the action of a very few simple rules upon some initial state. In one example, ‘Seed’ rules are applied: a square turns off if it is on, and a square turns on if exactly two of its eight neighbors are on. One can begin with a very simple pattern, start the engine, and end up with a dizzying array of gliders, blinkers, and asymmetric noise. A slightly more complex set of rules, called ‘Life’ and invented by John Conway in 1970, produces more intricate and unstable patterns.
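As a rough illustration of just how little machinery such a rule needs (this is only a sketch of the ‘Seed’ rule as described above, written in Python, and not the MIT demonstration itself), here is a tiny implementation on a wrap-around grid:

    import random

    SIZE = 20

    def step(grid):
        """One generation of the 'Seed'-style rule: every live cell turns off,
        and a dead cell turns on if exactly two of its eight neighbors are on."""
        new = [[0] * SIZE for _ in range(SIZE)]
        for r in range(SIZE):
            for c in range(SIZE):
                neighbors = sum(
                    grid[(r + dr) % SIZE][(c + dc) % SIZE]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0)
                )
                new[r][c] = 1 if grid[r][c] == 0 and neighbors == 2 else 0
        return new

    # Start with a sparse random sprinkling of live cells and watch a few generations.
    grid = [[1 if random.random() < 0.05 else 0 for _ in range(SIZE)] for _ in range(SIZE)]
    for generation in range(5):
        grid = step(grid)
        print("\n".join("".join("#" if cell else "." for cell in row) for row in grid))
        print()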

As the researchers point out, one very interesting aspect of these simulations is that the strange patterns and shapes created through these rudimentary interactions exist only in the mind of the observer—in reality, it’s just a bunch of intermingling black and white squares. The suggestion is that much of what we perceive as reality may be a secondary, artificial construct.

If your browser is Java-capable, you should be able to walk through the site in just a few minutes. Be forewarned, though: it’s hip to be square, and awfully addictive.