Mathematics was never one of my academic strengths. Blog admin isn’t a headline on my résumé, either. In college, though, I’ve found myself to be far less averse to the numerical arts than in high school, a change of attitude I chalk up to better teachers and to my own growing curiosity in fields such as cosmology, linguistics, analytic philosophy, and the like. I’m still in no danger of becoming a mathematician, I assure you; consequently, the part of the post wherein I actually know what I’m talking about probably ends with this period.
This term, the prof took us on a scenic, leisurely tour of linear and quadratic equations. When we arrived in the country of exponential and logarithmic functions, however, he quickened his pace severely and proceeded to administer the most grueling, extensive exam of the term thus far. Naturally, necessarily, I was moved to investigate the material on my own. I am sharing some of my findings in hopes that they may prove interesting and useful to other beginning and intermediate students of mathematics. This is not to mention providing a platform for the more algebraically inclined to prove what a dunce I am. Be kind, I prithee, for sooth.
- Exponents and Rules for Exponentiation
- Logarithms in General, and Their Properties
- The Number e
- A Basic Example of Logarithms in Algebra
Just as multiplication is a shortcut for repetitive addition, so exponentiation is a shortcut for repetitive multiplication. We know that 2 × 5 = 10; similarly, 2^5 = 32. We read the multiplication as “2 times 5,” and each of these two numbers we call the factors of the resulting product, which is 10. The exponentiation we might read as “2 to the fifth power,” or simply “2 to the fifth,” where 2 is the base, the number being operated on, and 5 is the exponent. An exponent essentially tells us how many copies of the base to multiply to complete the operation. So, 2^5 = 2 × 2 × 2 × 2 × 2, and 2 × 2 × 2 × 2 × 2 = 32.
There are certain rules for exponentiation, or properties of exponents, if you will, which give them their usefulness as a shortcut for multiplying large numbers. Dealing with expressions and equations involving exponents is made much easier if one masters these few concepts from the outset:
- x^m · x^n = x^(m+n). That is, if we multiply a base to one power times the same base to another power, the product is equal to our base to the power of the sum of the two exponents. For example, 2^2 · 2^3 = 2^5, which is 32. Conversely, we know that we could rewrite 2^5 as 2^2 · 2^3, in addition to 2^1 · 2^4, since the sum of the exponents is 5 in both cases.
- x^m / x^n = x^(m−n). Similarly, dividing a base to one power by the same base to another power gives a quotient equal to the given base to the power of the difference of the two exponents. 2^5 / 2^2 = 2^3, or 8.
- (x^m)^n = x^(mn). If we raise a base to a power and then, in turn, raise that result to another power, the end result is equal to our base to the power of the product of the two exponents. For example, (2^2)^3 = 2^6, or 64.
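The three rules are easy to check on a computer; here is a quick sanity check in Python, where the base 2 and exponents 3 and 4 are arbitrary picks for illustration:

```python
# Checking the three exponent rules numerically with Python's ** operator.
base, m, n = 2, 3, 4

assert base**m * base**n == base**(m + n)   # rule 1: 8 * 16 == 128
assert base**m / base**n == base**(m - n)   # rule 2: 8 / 16 == 0.5
assert (base**m)**n == base**(m * n)        # rule 3: 8^4 == 2^12 == 4096

print("all three rules hold")
```

Swap in any base and exponents you like; the assertions will still pass.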
Special cases and notes on terminology with which to be familiar include:
- Exponents 1 and 0. Any base raised to the first power is that number itself, so x^1 = x, and 5^1 = 5. Any base raised to the 0 power (even base 0, by most interpretations) equals 1, so x^0 = 1, and 5^0 = 1 as well. This is because, according to our rule (1) above, we know, for example, that 2^2 · 2^3 = 2^5, or 4 · 8 = 32. Following the same principle, we see that 2^1 · 2^0 = 2^1, since the sum of the exponents 1 and 0 is 1 and since 2^1 = 2; the quantity 2^0, then, must equal 1, and indeed any number to the 0 power must be 1 for the same reason.
- Exponents 2 and 3. The common terminology for x^2 is “x squared,” and x^3 is read as “x cubed.” This is because of geometry; if we take the length of the side of a square as our base and raise it to the second power, the result is the area of the square in square units. Similarly, the length of the side of a cube raised to the third power gives the volume of the cube in cubic units.
What happens if our base is a negative number, assuming we raise it to a positive, whole exponent?
- (−x)^n, where n is an even number, produces x^n, a positive result. That is, (−2)^2 = 4, and (−2)^4 = 16. This is because (−x)(−x) = x^2; the negatives cancel after one multiplication.
- (−x)^n, where n is an odd number, produces −(x^n), a negative result. So (−2)^3 = −8, and (−2)^5 = −32. This is because (−x)(−x) = x^2, but that positive product times −x once more gives −(x^3), since a positive number times a negative number yields a negative product.
Please note that (−x)^n is not the same as −x^n! The first generally denotes −x to some power; the second, which also could be written as −(x^n), asks for the opposite of the quantity “x to some power.” For example, (−2)^2 = 4, but −2^2 = −4. The placement of parentheses determines the order of operations in such examples. When no parentheses are present, the principle that x^n denotes a single quantity in unsimplified form means that the negative sign is meant to apply to that quantity after it has been simplified.
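Conveniently, Python’s ** operator follows the very same convention, so you can watch the parentheses do their work:

```python
# With no parentheses, the minus sign applies after the exponentiation.
print((-2)**2)   # 4   -- the base itself is -2
print(-2**2)     # -4  -- read as -(2**2)
print((-2)**3)   # -8  -- an odd exponent preserves the negative sign
```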
There is no special rule to consider, really, when the base is fractional—the results just look a bit different, since we are essentially multiplying fractions. Consider (1/2)^2 = 1/4, or (2/3)^3 = 8/27.
Now the soup thickens a bit.
- x^(−n) = 1/x^n. In other words, a base raised to a negative power gives the reciprocal of that base raised to the corresponding positive power, or 1 over that number. So, 2^(−2) = 1/2^2 = 1/4.
- x^(m/n) = ⁿ√(x^m), the nth root of x^m. The procedure for fractional exponents is perhaps more precarious to describe than to use in practice. Suppose we are asked to evaluate 4^(1/2). Using our rule formula, we would get ²√(4^1), which simplifies to 2, since any number to the first power is that number itself and since we know the square root of 4 to be 2. Note that the 2 outside the radical sign is superfluous, since we understand a radical alone to mean “square root;” we added it just for clarity in correspondence with our formula. For an example in which the numerator of the exponent is not 1, consider 2^(2/5). Again, following our rule formula, we get ⁵√(2^2), which, since we know that 2 squared is 4, would simplify to the fifth root of four, ⁵√4.
In terms of dealing with rule (2) in practice, the helpful verbal phrase to know is that “x to the m over n power equals the nth root of x to the m power.” A bit unwieldy, yes; but after a bit of practice it begins to come naturally.
In this graph, the dark blue line is the graph of y = x^2. You can see that, right where it exits the picture to the north, x = ±2 and y = 4. The two other graphs that are also parabolic in shape (U-shaped) are those of y = x^4 (red) and y = x^6 (green). Notice that, the higher the exponent, the steeper the parabola becomes, for obvious reasons.
There are three other graphs in the picture which appear to rise from near the origin (0,0) and fly off to the right; these are the graphs of y = x^(1/2), y = x^(1/3), and y = x^(1/4). Because x^(1/n) = ⁿ√x, these graphs are equivalent to root functions—that is, they are the same as y = ²√x, y = ³√x, and y = ⁴√x. The graphs of the even-numbered root functions are shaped like halves of sideways parabolas; the odd-numbered root functions, which include negative y values, are shaped differently.
Note that every graph passes through the point (1,1). This is because 1 raised to any power remains 1. Neat, huh? Well, at any rate, I think so. ;-)
What is a logarithm? We said earlier that, in exponential operations, there is a base, which is the number being operated upon, and an exponent, which tells us how many copies of the base to multiply. Such an exponential expression is the antilogarithm of a logarithmic function of the same base. We know, for instance, that 2^5 = 32. But what if we were asked the question: to what power must we raise the base 2 to get a product of 32?
In mathspeak, this is written: log_2(32) = x. We know the answer is 5, since 2^5 = 32. The logarithm of a number, then, is the power to which we must raise a given base in order to obtain that number. The subscript number gives us the base, and the other quantity is the number of which we are finding the logarithm. So, for example, log_3(9) = 2, since 3^2 = 9; and log_10(1000) = 3, since 10^3 = 1000.
The equation set which defines this identity of logarithms can be written as: y = log_b(x) if and only if b^y = x.
We can see that, just as division can be used to “undo” multiplication for a given factor, so logarithms can “undo” exponential operations upon a given base. Note that it is not possible to find the real logarithm of a negative number, because the logarithm is, in effect, the value of an exponent. For example, while (−3)^2 = 9, because the two negative factors cancel, it is the base which is negative, not 9. The quantity log_3(−9) is undefined by real numbers; there is no power to which 3 can be raised to give a product of -9 (although there is an imaginary power—let’s not go there right now!).
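For readers who like to poke at definitions with a keyboard, Python’s standard math module offers logarithms in any base (math.log, math.log2, and math.log10 are all real, standard-library functions):

```python
import math

# log_b(x) answers: to what power must we raise b to obtain x?
print(math.log2(32))       # 5.0, since 2^5 = 32
print(math.log10(1000))    # 3.0, since 10^3 = 1000

# Logarithms undo exponentiation upon a given base:
assert math.isclose(math.log(2**7, 2), 7)

# And the real logarithm of a negative number is undefined:
try:
    math.log(-9, 3)
except ValueError:
    print("log of a negative number has no real value")
```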
There are some general properties of logarithms which make them useful tools in complex calculations:
- log_b(x^n) = n · log_b(x). So, then, for log_10(1000), we can create the equivalence log_10(10^3) = 3 · log_10(10). That is, the base-10 logarithm of 1000 is the same as 3 times the base-10 logarithm of 10. It checks: log_10(1000) = 3, and 3 · log_10(10) = 3 · 1 = 3.
- log_b(b^x) = x. This seems self-evident enough. For log_10(10^2), the answer is, of course, 2, because 10^2 = 100 and log_10(100) = 2.
- log_b(xy) = log_b(x) + log_b(y). This is an expansion of a logarithm. Consider log_2(8). We know that 8 = 4 · 2, so this rule dictates that log_2(8) = log_2(4) + log_2(2), or 3 = 2 + 1.
- log_b(x/y) = log_b(x) − log_b(y). Here we have the sister of rule (3). We can rewrite 4 as 8/2, and then can see that log_2(4) = log_2(8) − log_2(2), or 2 = 3 − 1.
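All four properties can be confirmed in a few lines; the base 10 and the numbers 8 and 2 below are arbitrary choices:

```python
import math

b, x, y = 10, 8, 2

# Power rule: log_b(x^n) = n * log_b(x)
assert math.isclose(math.log(x**3, b), 3 * math.log(x, b))
# Product rule: log_b(xy) = log_b(x) + log_b(y)
assert math.isclose(math.log(x * y, b), math.log(x, b) + math.log(y, b))
# Quotient rule: log_b(x/y) = log_b(x) - log_b(y)
assert math.isclose(math.log(x / y, b), math.log(x, b) - math.log(y, b))

print("log rules verified")
```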
Another special rule, the “change of base” rule, is formulated as log_b(x) = log_d(x) / log_d(b),
where d and b represent different bases. For an example,
log_5(100) = log_10(100) / log_10(5) = 2 / 0.69897 ≈ 2.8614, approximately. This rule can be used to convert from one base to another, such as when trying to find, say, base-5 logarithms with a calculator that handles only bases e and 10.
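The same conversion, written out in Python using only the base-10 logs a calculator provides:

```python
import math

# Base-5 logarithm of 100 via the change-of-base rule:
log5_100 = math.log10(100) / math.log10(5)
print(round(log5_100, 4))        # 2.8614

# math.log's optional base argument performs the same conversion internally:
assert math.isclose(log5_100, math.log(100, 5))
```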
Logarithms were an important tool in speeding up meticulous multiplications in the days before computers entered the scene, an application most often credited to the designs of John Napier (1550 – 1617). Volumes which consisted of tables of logarithms and antilogarithms were published, so that, to multiply two quantities, the logarithms of each could be looked up and added together. In turn, the antilogarithm of the sum of these logarithms could be looked up, yielding the desired product of the original factors.
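The table-lookup procedure is easy to mimic today, with base-10 logs standing in for the published volumes (the factors 123 and 456 are arbitrary):

```python
import math

# Multiplying 123 by 456 the pre-computer way: look up the two logarithms,
# add them, then look up the antilogarithm of the sum (10 to that power).
log_sum = math.log10(123) + math.log10(456)
product = 10 ** log_sum
print(round(product))    # 56088, the same answer as 123 * 456
```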
Logarithmic scales are important in numerous applications of science. pH, for example, the measure of chemical acidity or alkalinity, can be defined as: pH = −log_10[H+].
That is, the pH of a substance is the negative base-10 logarithm of the concentration of hydrogen ions in that substance, measured in moles per liter. Because pH is plotted along a base-10 logarithmic scale, a substance with pH 8 is 10 times as alkaline as a neutral substance (pH 7), and a substance with pH 5 would be 100 times as acidic as the same neutral substance (the lower the pH, the more acidic the substance; higher pH indicates alkalinity).
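In code, the definition is a one-liner, and the hundredfold difference between pH 7 and pH 5 falls right out:

```python
import math

def ph(hydrogen_molarity):
    """pH = -log10 of the hydrogen-ion concentration in moles per liter."""
    return -math.log10(hydrogen_molarity)

print(ph(1e-7))   # 7.0 -- neutral
print(ph(1e-5))   # 5.0 -- 100 times the hydrogen ions, 100 times as acidic
```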
The Richter Scale for measuring seismic activity is also base-10 logarithmic; thus an earthquake which measures 4.0 produces a seismograph wave of an amplitude that is 10 times as great as the wave produced by a 3.0 quake. The actual seismic energy represented by those graph measurements, however, grows by a factor of roughly 32 (about 10^1.5) per whole step, so that the energy released by a 4.0 quake is not ten times, but about 32 times, that created by a 3.0 quake.
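The amplitude and energy ratios per magnitude step can be sketched like so (the 1.5 exponent reflects the approximate energy relationship just described):

```python
# Each whole step on the Richter scale multiplies wave amplitude by 10,
# but seismic energy by roughly 10^1.5, which is about 32.
def amplitude_ratio(m1, m2):
    return 10 ** (m1 - m2)

def energy_ratio(m1, m2):
    return 10 ** (1.5 * (m1 - m2))

print(amplitude_ratio(4.0, 3.0))        # 10.0
print(round(energy_ratio(4.0, 3.0)))    # 32
```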
Logarithms are also of interest because they are capable of mapping the set of positive real numbers (under multiplication) to the set of all real numbers (under addition). In so doing, they describe an isomorphic relationship. Consider that log_b(x) < 0 if 0 < x < 1, and that log_b(x) > 0 if x > 1. That is, while we cannot take the real logarithm of a negative number, the logarithm of a positive number greater than 0 but less than 1 will be a unique negative number, while the logarithms of all numbers greater than 1 are unique positive numbers. This kind of 1:1 mapping is called a bijection.
How logarithmic expressions are notated depends heavily upon context, as there are no universally agreed-upon conventions. Most modern calculators use log to mean ‘base-10 log’ and ln to mean ‘natural log’ (we will discuss the natural logarithm presently), a convention followed by many engineers.
Because our number system is base-10, owing to the number of fingers of the average human, base-10 logarithms are frequently referred to as “common logarithms.” However, some mathematicians take log to mean “natural logarithm” rather than “base-10 logarithm,” and do not use the ln expression at all. Also, computer technicians sometimes take log to mean “base-2 logarithm,” since they deal with binary (base-2) numbers so often.
Suffice it to say, it is best to specify the base of a logarithm if there is a chance of misinterpretation. In most modern mathematics textbooks, the engineering conventions are followed, so that log 12 means the “base-10 logarithm of 12,” while ln 12 means the “natural logarithm of 12.”
The mathematical constant e, sometimes called Euler’s number, is a transcendental, irrational number (a non-terminating, non-repeating decimal) with remarkable properties. Specifically, it is the base of the natural logarithm, approximated to three decimal places by the number 2.718.
What is so special about this number? Nothing overtly obvious, perhaps, but e is one of the most important numbers in mathematics, right up there with stars like 0, 1, pi, and phi.
In this graph, the thick, black diagonal line is the graph of y = x + 1, a line of slope 1 which passes through the point (0,1).
The three colored lines represent three exponential functions. The red line is the graph of y = 2^x, the green is y = 4^x; and, in between these values, sits the thicker blue line, which is the graph of y = e^x, or approximately y = 2.718^x.
e is the only number n such that the graph of y = n^x is exactly tangent, at its y-intercept (0,1), to the line of slope 1 drawn there (the black line above). Both y = 2^x and y = 4^x slightly miss the mark, but y = e^x, which sits somewhere between them, is precisely tangent to the line as it crosses the y axis.
Jacob Bernoulli (1654 – 1705) may have been the first to discover some of the more remarkable characteristics of e. He discovered its identity (approximately, of course) by studying a problem on compound interest.
Suppose we were to open an interest-bearing account in the amount of $1.00, which happened to pay the remarkable dividend of 100% interest per year. If the interest were compounded only annually, then the value of the account at the end of the year would be $2.00. If the interest were compounded twice annually, though, we would be credited $1.00 times (1 + 1/2)^2, or $2.25, at the end of the year. If it were compounded quarterly, it would be worth about $2.44 at year’s end; compounding daily—that is, compounding 365 times—would put the value at $2.71 after 12 months.
Bernoulli noticed that, as the number of compoundings increased, the extra revenue produced by additional compoundings diminished. That is, while 4 compoundings would produce revenues $0.44 greater than a single compounding, a whopping 365 compoundings would add just $0.27 more revenue than what could be earned with only 4 compoundings.
Noting this trend, Bernoulli calculated that e is the value of the account if the interest is compounded an infinite number of times. The more compoundings, the closer the value of an account with a principal amount of $1.00 at an interest rate of 100% approaches approximately $2.7182818. . .you get the picture.
The following sequence illustrates this progression, and can be tested on a scientific calculator: (1 + 1/n)^n, for n = 1, 2, 4, 365, and so on.
The larger n becomes, the closer the value of (1 + 1/n)^n approaches e. e is the limit of this expression as n approaches infinity.
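You can watch the value creep toward e numerically; a minimal sketch of Bernoulli’s experiment, with a few arbitrary compounding counts:

```python
import math

# $1.00 at 100% annual interest, compounded n times per year,
# is worth (1 + 1/n)^n dollars at year's end.
for n in (1, 2, 4, 365, 1_000_000):
    print(n, round((1 + 1/n) ** n, 7))

print("e =", math.e)   # 2.718281828459045
```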
Another related and perhaps even more interesting way to define e is as the sum of the infinite series: e = 1/0! + 1/1! + 1/2! + 1/3! + 1/4! + . . .
! is the factorial symbol. The factorial of a number is the product of all positive integers which are less than or equal to that number. 0! = 1 because the product of no numbers at all is 1; 1 is the “empty product” in number theory. 1! = 1, 2! = 2, and 3! = 6, etc. The further we expand the series, the closer the sum of the reciprocals approaches e. If we stop at 10!, we get approximately 2.7182818.
We are already mindbendingly close to e by tossing in the towel at just 1/10!, but we could continue far beyond that, and thereby only slightly more accurately approximate its value.
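The series is a one-liner to sum, and the error after eleven terms (1/0! through 1/10!) really is astonishingly small:

```python
import math

# e as the sum of reciprocal factorials, stopping at 1/10!:
approx = sum(1 / math.factorial(n) for n in range(11))
print(approx)                 # about 2.7182818
print(abs(math.e - approx))   # the error is already below 0.00000003
```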
Such is the magic of e, and why it is one of the most beautiful numbers we know.
Natural logarithms are defined as logarithms to the base e, and are most frequently represented by the expression ln(x). So, then, the equation ln(1) = x effectively asks us: to what power must we raise e to obtain the product 1? The answer in this case happens to be 0, since any number to the 0 power is 1.
The following graph shows y = ln(x) (blue) and y = log_10(x) (red), for comparison.
Note that, on the blue graph, y = 1 where x = e ≈ 2.718, because e^1 = e; on the red graph, y = 1 where x = 10, because 10^1 = 10. Both graphs pass through the point (1,0) because, since any number to the 0 power is 1, the logarithm of 1 is 0 in every base.
In rudimentary algebra, one immediately useful feature of logarithms is to help find the identity of a variable which is an exponent or part of an exponent by “undoing” the exponentiation. Consider the equation 2^(3x) = 17. How do we find x? Recall the property for logarithms we listed above which states that log_b(x^n) = n · log_b(x). This means that log(2^(3x)) = 3x · log(2). We will use base-10 since it is convenient for computers and calculators; we could, in theory, use any base for a calculation such as this, including, of course, e.
The rules of algebra dictate that what is done to one side of the equation must be done to the other; observing this, we get 3x · log(2) = log(17). We then divide through by log(2), yielding 3x = log(17) / log(2). Now we can divide through again, this time by three, giving us the exact answer: x = log(17) / (3 · log(2)).
If we use a calculator to approximate the base-10 logarithmic values (log button) and perform the arithmetic, we see that x ≈ 1.3625. Plugging that value back into the original equation to check our work, we see that 2^(3 · 1.3625) ≈ 17.
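The same calculation can be handed to Python; the equation 2^(3x) = 17 here is an arbitrary sample of the form just discussed, solved by taking base-10 logs of both sides and dividing:

```python
import math

# Solve 2^(3x) = 17 for x using base-10 logarithms.
x = math.log10(17) / (3 * math.log10(2))
print(round(x, 4))              # 1.3625

# Plug the value back in to check the work:
print(round(2 ** (3 * x), 4))   # 17.0
```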
I love me some recursion, folks. Hopefully you’ll enjoy these fractal animations I’ve gleaned and compiled from YouTube. The music is, in my humble opinion, much more forgettable than the video in some instances—but your ears are not mine, so perhaps you’ll hear something I don’t. As for the graphics, though: get out your Mandelbrot Brand spectacles. Enjoy the turbulence while it lasts.
Far more than mere shimmering pretties, fractals are geometric patterns which have fine structure at arbitrarily small scales, such that the structure at those small scales at least approximately repeats the shape of the whole. Some examples of naturally occurring fractals include snowflakes, lightning bolts, tree branches, fern leaves [ed. okay, fern fronds]—in fact, fractals seem to be integral to the geometric expression of natural forms in any direction you happen to be looking (the link is to the Wikipedia article with excellent illustrations and explanations). The principle of recursion is fundamental to number theory and has been gaining the attention of mathematicians and cosmologists at least since the days of Leibniz, and increasingly so since the exploits of Benoit Mandelbrot and in this age of all things electro-graphical.
Over ten years ago, Mitchel Resnick and Brian Silverman of the Massachusetts Institute of Technology came up with an interesting software demonstration of the phenomenon of emergence. Emergence is loosely defined as the appearance of complex architecture or behavior that follows from simple rules, and is a cornerstone of most conceptions of biological evolution. Evolutionists believe that the preponderance of evidence suggests the sufficiency of emergence as a driver for adaptation, that no appeal to a ‘watchmaker’ of super-humanesque intelligence is necessary.
In their demonstration, Resnick and Silverman work with a black, two-dimensional plane composed of small squares which can ‘turn on’ (turn white) according to the action of a very few simple rules upon some initial state. In one example, ‘Seed’ rules are applied: a square turns off if it is on, and a square turns on if exactly two of its eight neighbors are on. One can begin with a very simple pattern, start the engine, and end up with a dizzying array of gliders, blinkers, and asymmetric noise. A slightly more complex set of rules, called ‘Life’ and invented by John Conway in 1970, produces more intricate and unstable patterns.
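Resnick and Silverman’s demonstration runs as a Java applet, but the ‘Seed’ rule itself fits in a few lines; here is a sketch of the rule as described above (my own toy version, not their code), storing live squares as a set of coordinates:

```python
# 'Seed' rules: every live cell dies each generation, and a dead cell
# turns on iff exactly two of its eight neighbors are live.

def seeds_step(live):
    offsets = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
               if (dr, dc) != (0, 0)]
    # Count live neighbors for every cell adjacent to a live one.
    counts = {}
    for r, c in live:
        for dr, dc in offsets:
            cell = (r + dr, c + dc)
            counts[cell] = counts.get(cell, 0) + 1
    # Only dead cells with exactly two live neighbors are on next generation.
    return {cell for cell, n in counts.items() if n == 2 and cell not in live}

# Two adjacent live squares already blossom outward:
state = {(0, 0), (0, 1)}
for _ in range(3):
    state = seeds_step(state)
    print(sorted(state))
```

Start from almost any seed pattern and the grid erupts into exactly the kind of expanding, drifting noise the applet shows.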
As the researchers point out, one very interesting aspect of these simulations is that the strange patterns and shapes created through these rudimentary interactions exist only in the mind of the observer—in reality, it’s just a bunch of intermingling black and white squares. The suggestion is that much of what we perceive as reality may be a secondary, artificial construct.
If your browser is Java-capable, you should be able to walk through the site in just a few minutes. Be forewarned, though: it’s hip to be square, and awfully addictive.