With a solid interpretation of logarithms under our belt, we are now in a position to look at the basic properties of the logarithm and understand what they are saying. The defining characteristic of logarithm functions is that they are real-valued functions $f$ such that
Property 1: Multiplying inputs adds outputs.
$$f(x \cdot y) = f(x) + f(y) \text{ for all } x, y \in \mathbb{R}^+. \tag{1}$$
This says that whenever the input grows (or shrinks) by a factor of $y$, the output goes up (or down) by only a fixed amount $f(y)$, which depends only on $y$. In fact, equation (1) alone tells us quite a bit about the behavior of $f$, and from it, we can almost guarantee that $f$ is a logarithm function. First, let's see how far we can get using equation (1) all by itself:
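To make (1) concrete, here is one familiar function with this property, $\log_{10}$, on a pair of round numbers:
$$\log_{10}(100 \cdot 1000) = \log_{10}(100{,}000) = 5 = 2 + 3 = \log_{10}(100) + \log_{10}(1000).$$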
Property 2: 1 is mapped to 0.
$$f(1) = 0. \tag{2}$$
This says that the amount the output changes if the input grows by a factor of 1 is zero — i.e., the output does not change if the input changes by a factor of 1. This is obvious, as "the input changed by a factor of 1" means "the input did not change."
Exercise: Prove (2) from (1).
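If you want to check your answer: set $x = y = 1$ in (1), then subtract $f(1)$ from both sides:
$$f(1) = f(1 \cdot 1) = f(1) + f(1), \text{ so } f(1) = 0.$$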
Property 3: Reciprocating the input negates the output.
$$f\left(\frac{1}{x}\right) = -f(x). \tag{3}$$
This says that the way that growing the input by a factor of $x$ changes the output is exactly opposite to the way that shrinking the input by a factor of $x$ changes the output. In terms of the "communication cost" interpretation, if doubling (or tripling, or $n$-times-ing) the possibility space increases costs by $c$, then halving (or thirding, or $n$-parts-ing) the space decreases costs by $c$.
Exercise: Prove (3) from (2) and (1).
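Again, to check your answer: apply (1) to the product $x \cdot \frac{1}{x}$ and then use (2):
$$0 = f(1) = f\left(x \cdot \frac{1}{x}\right) = f(x) + f\left(\frac{1}{x}\right), \text{ so } f\left(\frac{1}{x}\right) = -f(x).$$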
Property 4: Dividing inputs subtracts outputs.
$$f\left(\frac{x}{y}\right) = f(x) - f(y). \tag{4}$$
This follows immediately from (1) and (3).
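Spelled out, the chain is:
$$f\left(\frac{x}{y}\right) = f\left(x \cdot \frac{1}{y}\right) = f(x) + f\left(\frac{1}{y}\right) = f(x) - f(y).$$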
Exercise: Give an interpretation of (4).
There are at least two good interpretations:
- Shrinking the input by a factor of $y$ changes the output exactly as much as growing the input by a factor of $y$, but in the opposite direction.
- The difference between the outputs at $x$ and at $y$ depends only on the ratio $x/y$ between the inputs.
Try translating these into the communication cost interpretation if it is not clear why they're true.
Property 5: Exponentiating the input multiplies the output.
$$f(x^n) = n \cdot f(x). \tag{5}$$
This says that multiplying the input by $x$, $n$ times over, incurs $n$ identical changes to the output. In terms of the communication cost metaphor, this is saying that you can emulate an $x^n$-digit using $n$ different $x$-digits.
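For example, since $16 = 2^4$, one $16$-digit (a hexadecimal digit) can be emulated by four $2$-digits (bits), and the costs match:
$$f(16) = f(2^4) = 4 \cdot f(2).$$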
Exercise: Prove (5).
This is easy to prove when $n \in \mathbb{N}$.
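A sketch of that case: use (1) to peel off one factor of $x$ at a time,
$$f(x^n) = f(x \cdot x^{n-1}) = f(x) + f(x^{n-1}) = \cdots = \underbrace{f(x) + \cdots + f(x)}_{n \text{ times}} = n \cdot f(x).$$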
For $n \in \mathbb{Q}$, this is a bit more difficult; we leave it as an exercise to the reader. Hint: Use the proof of (6) below, for $n \in \mathbb{N}$, to bootstrap up to the case where $n \in \mathbb{Q}$.
For $n \in \mathbb{R}$, this is actually not provable from (1) alone; we need an additional assumption (such as continuity) on $f$.
Property 5 is actually false, in full generality — it's possible to create a function $f$ that obeys (1), and obeys (5) for $n \in \mathbb{Q}$, but which exhibits pathological behavior on irrational numbers. For more on this, see pathological near-logarithms.
This is the first place that property (1) fails us: (5) is true for $n \in \mathbb{Q}$, but if we want to guarantee that it's true for $n \in \mathbb{R}$, we need $f$ to be continuous, i.e., we need to ensure that if $f$ follows (5) on the rationals, it's not allowed to do anything insane on irrational numbers only.
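As an aside, here is a sketch of where such pathologies come from (it assumes familiarity with Hamel bases, which go beyond this text): take any additive but non-linear map $A : \mathbb{R} \to \mathbb{R}$, which exists via a Hamel basis of $\mathbb{R}$ over $\mathbb{Q}$, and set
$$f(x) = A(\ln x), \text{ where } A(u + v) = A(u) + A(v) \text{ but } A \text{ is not of the form } A(u) = cu.$$
This $f$ satisfies (1), and satisfies (5) for every $n \in \mathbb{Q}$ (additive maps are automatically $\mathbb{Q}$-linear), yet it behaves wildly at irrational exponents.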
Property 6: Rooting the input divides the output.
$$f(\sqrt[n]{x}) = \frac{f(x)}{n}. \tag{6}$$
This says that, to change the output one $n$th as much as you would if you multiplied the input by $x$, multiply the input by the $n$th root of $x$. See Fractional digits for a physical interpretation of this fact.
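For instance, a $\sqrt{10}$-digit is worth half a $10$-digit, because two $\sqrt{10}$-digits emulate one $10$-digit:
$$f(\sqrt{10}) = f(10^{1/2}) = \frac{f(10)}{2}.$$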
Exercise: Prove (6).
As with (5), (6) is always true if $n \in \mathbb{Q}$, but not necessarily always true if $n \in \mathbb{R}$. To prove (6) in full generality, we additionally require that $f$ be continuous.
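For $n \in \mathbb{N}$, one route to check your answer: apply (5) to $\sqrt[n]{x}$,
$$n \cdot f(\sqrt[n]{x}) = f\left(\left(\sqrt[n]{x}\right)^n\right) = f(x), \text{ so } f(\sqrt[n]{x}) = \frac{f(x)}{n}.$$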
Property 7: The function is either trivial, or sends some input to 1.
$$\text{Either } f(x) = 0 \text{ for all } x, \text{ or there exists } b \text{ such that } f(b) = 1. \tag{7}$$
This says that either $f$ is very boring (and does nothing regardless of its inputs), or there is some particular factor $b$ such that when the input changes by a factor of $b$, the output changes by exactly $1$. In the communication cost interpretation, this says that if you're measuring communication costs, you've got to pick some unit (such as $b$-digits) with which to measure.
Exercise: Prove (7).
Suppose $f$ does not send all inputs to $0$, and let $x$ be an input that $f$ sends to some $y \neq 0$. Then $f(x^{1/y}) = \frac{f(x)}{y} = \frac{y}{y} = 1$.[1]
[1] Is $x^{1/y}$ well-defined? We know that $x \neq 1$, because $f(x) \neq 0$ whereas, by (2), $f(1) = 0$.
Property 8: If the function is continuous, it is either trivial or a logarithm.
$$\text{Either } f(x) = 0 \text{ for all } x, \text{ or } f(b^x) = x \text{ for some base } b. \tag{8}$$
This property follows immediately from (5): pick $b$ such that $f(b) = 1$, and then $f(b^x) = x \cdot f(b) = x$. Thus, (8) is always true if $x$ is rational, and if $f$ is continuous then it's also true when $x$ is irrational.
Property (8) states that if $f$ is non-trivial, then it inverts exponentials with base $b$. In other words, $f$ counts the number of $b$-factors in $x$. In other words, $f$ counts how many times you need to multiply $1$ by $b$ to get $x$. In other words, $f = \log_b$!
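As a quick numeric check, take $b = 2$ and an input of $8$:
$$f(8) = f(2^3) = 3 \cdot f(2) = 3 = \log_2 8.$$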
Many texts take (8) to be the defining characteristic of the logarithm. As we just demonstrated, one can also define logarithms by (1), as continuous non-trivial functions whose outputs grow by a constant (that depends on $x$) whenever their inputs grow by a factor of $x$. All other properties of the logarithm follow from that.
If you want to remove the "continuous" qualifier, you're still fine as long as you stick to rational inputs. If you want to remove the "non-trivial" qualifier, you can interpret the function that sends everything to zero as $\log_\infty$. Allowing $b = \infty$ and restricting ourselves to rational inputs, every function $f$ that satisfies equation (1) is isomorphic to a logarithm function.
In other words, if you find a function whose output changes by a constant (that depends on $x$) whenever its input grows by a factor of $x$, there is basically only one way it can behave. Furthermore, that function only has one degree of freedom — the choice of $b$ such that $f(b) = 1$. As we will see next, even that degree of freedom is rather paltry: all logarithm functions behave in essentially the same way. As such, if we find any $f$ satisfying equation (1) (or any physical process well-modeled by such an $f$), then we immediately know quite a bit about how $f$ behaves.
You may be wondering, "what if $y$ is negative, or a fraction?" If so, see Strange roots. Short version: $x^{1/y}$ is perfectly well-defined.