The hypothesis c = h = G = 1 implies that unit mass is not a single-valued function of unit time but instead takes two widely different values, namely 7.4 × 10^{−51} kg and 4.0 × 10^{35} kg. Hence, the considerable body of work in theoretical physics that uses this common convention must be deemed suspect. To avoid this problem, theoreticians must limit themselves to c = h = 1 or, exclusively, c = G = 1, depending upon whether they are chiefly concerned with atomic physics or with gravity, respectively.

The primitive dimensions of mechanics are space, mass and time. We can experience these dimensions intuitively through our senses. When we walk across a room we experience moving through an extent of physical space. When we do unaided manual labor we become familiar with mass or at least weight. When we celebrate an anniversary we acknowledge an interval of time lasting one year. But to make scientific measurements we need a measurement system.

Definition 1: A physical measurement system is an ordered 3-tuple containing three distinct literal words chosen from a natural language, such as English, that designate the unit of measurement for an interval of space, the unit of measurement for a quantity of mass, and the unit of measurement for an interval of time, in that order. Each element of a physical measurement system is independent of the other two.

The MKS system [meter, kilogram, second] is an example of a physical measurement system and the one that will be used in this paper. Note that the meter is not an intrinsic function of the kilogram or of the second, and the kilogram is not an intrinsic function of the second. Rather, each element of the MKS system was chosen by historical serendipity. For clarity, this paper will abbreviate kilogram as kg, and second as sec, but will not abbreviate meter.

There is also a historically more recent type of measurement system.

Definition 2: A natural measurement system is an ordered 3-tuple [Ux, Um, Ut] of three distinct word variables that designate the unit of measurement for an interval of space, the unit of measurement for a quantity of mass, and the unit of measurement for an interval of time, in that order. The elements of a natural measurement system are not all independent of one another.

In theoretical physics it is a common practice to simplify calculations by using a natural measurement system to give one or more constants a magnitude of unity [ ].

Begin with a simple but important observation.

Lemma: The ratio of two units of measurement for the same primitive dimension raised to the same power is a dimensionless real number.

Proof (by example): It is convenient to use an example that will be helpful in the sequel. The meter and the light-second (the distance light travels in vacuum during one second) are both units of measurement for the primitive dimension of space, and

meter·light-sec^{−1} = (2.998 × 10^{8})^{−1} = 3.336 × 10^{−9}, (1)

a dimensionless real number. ∎

Now consider a simple and common natural unit by adopting the convention

c = 1 Ux·Ut^{−1}, (2)

where c is the vacuum speed of light. Equation (2) means

1 Ux·Ut^{−1} = 2.998 × 10^{8} meter·sec^{−1}. (3)

Equation (3) has simple solutions since it follows from (2) and (3) that

Ux = light-Ut, (4)

where one light-Ut (e.g., a light-second or a light-year) is the distance light travels in vacuum during one Ut.

It then follows by lemma that

c = 1 light-Ut·Ut^{−1} = (sec·Ut^{−1})·(meters·light-Ut^{−1})^{−1} meter·sec^{−1}, (5)

where (meters·light-Ut^{−1}) and (sec·Ut^{−1}) are dimensionless but variable real numbers that depend upon the value for Ut in seconds.
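The role of these ratios can be checked numerically: for any choice of Ut, the two lemma ratios combine to reproduce c in MKS units, which is what makes c = 1 light-Ut·Ut^{−1} consistent. A minimal Python sketch (the variable names and the sample units are ours, not the paper's):

```python
# Check equation (5): for any time unit Ut, the dimensionless lemma ratios
# (sec·Ut^-1) and (meters·light-Ut^-1) combine to give c = 2.998e8 meter/sec,
# i.e. c = 1 light-Ut per Ut.
C_MKS = 2.998e8  # vacuum speed of light, meter·sec^-1

for name, ut_in_sec in [("second", 1.0), ("hour", 3600.0), ("year", 3.156e7)]:
    meter_per_light_ut = 1.0 / (C_MKS * ut_in_sec)  # lemma ratio meter/light-Ut
    sec_per_ut = 1.0 / ut_in_sec                    # lemma ratio sec/Ut
    c_mks = sec_per_ut / meter_per_light_ut         # right-hand side of (5)
    print(f"Ut = {name}: c = {c_mks:.3e} meter/sec")  # always 2.998e+08
```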

Let k be a physical constant that must be expressed using all three primitive dimensions: space, mass and time. If we have c = 1, then it is obvious from (4)-(5) that in order to have c = k = 1, unit mass Um must be a single-valued function of unit time Ut. As a common example, suppose we have

c = h = 1, (6)

where h is Planck’s constant. In the natural measurement system, (6) means

h = 1 Um·Ux^{2}·Ut^{−1}, (7)

while in the MKS system

h = 6.626 × 10^{−34} kg·meter^{2}·sec^{−1}. (8)

Equations (7)-(8) mean that

1 Um·Ux^{2}·Ut^{−1} = 6.626 × 10^{−34} kg·meter^{2}·sec^{−1}. (9)

It follows from (9) that

Um = 6.626 × 10^{−34} (meters^{2}·light-Ut^{−2})·(sec·Ut^{−1})^{−1} kg, (10)

where by lemma (meters^{2}·light-Ut^{−2}) and (sec·Ut^{−1}) are dimensionless real numbers. Without loss of generality, assume Ut = “second”. Then from (1), (10) becomes

Um = 6.626 × 10^{−34} × (3.336 × 10^{−9})^{2} kg = 7.4 × 10^{−51} kg. (11)
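The value of Um implied by c = h = 1 can be reproduced directly from the MKS value of h. A minimal Python sketch, using the rounded constants from the text (variable names are ours):

```python
# Solve (9) for Um with Ut = second, so that Ux = light-second = 2.998e8 meters.
H_MKS = 6.626e-34  # Planck's constant, kg·meter^2·sec^-1
C_MKS = 2.998e8    # meters per light-second, from (1)

meter_per_light_sec = 1.0 / C_MKS       # dimensionless lemma ratio in (10)
um_kg = H_MKS * meter_per_light_sec**2  # unit mass Um in kilograms
print(f"Um = {um_kg:.2e} kg")           # ≈ 7.4e-51 kg
```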

But now suppose we want

c = G = 1, (12)

where

G = 6.674 × 10^{−11} meter^{3}·kg^{−1}·sec^{−2} (13)

is the gravitational constant. Then (12) becomes

1 Ux^{3}·Um^{−1}·Ut^{−2} = 6.674 × 10^{−11} meter^{3}·kg^{−1}·sec^{−2}, (14)

such that

Um = (6.674 × 10^{−11})^{−1}·(meters^{3}·light-Ut^{−3})^{−1}·(Ut^{2}·sec^{−2})^{−1} kg, (15)

where by lemma, (meters^{3}·light-Ut^{−3}) and (Ut^{2}·sec^{−2}) are dimensionless real numbers. Again assume without loss of generality that Ut = “second”. Then from (1) and (15),

Um = (6.674 × 10^{−11})^{−1} × (3.336 × 10^{−9})^{−3} kg (16)

and

Um = 4.0 × 10^{35} kg. (17)
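As with the Planck-constant case, the value of Um implied by c = G = 1 follows directly from the MKS value of G. A minimal Python sketch with the same rounded constants (variable names are ours):

```python
# Solve (14) for Um with Ut = second, so that Ux = light-second = 2.998e8 meters.
G_MKS = 6.674e-11  # gravitational constant, meter^3·kg^-1·sec^-2
C_MKS = 2.998e8    # meters per light-second, from (1)

um_kg = C_MKS**3 / G_MKS       # equations (15)-(17): Um = Ux^3 / (G·Ut^2)
print(f"Um = {um_kg:.2e} kg")  # ≈ 4.0e35 kg
```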

Comparing (11) and (17), we see that the hypothesis

c = h = G = 1 (18)

implies

7.4 × 10^{−51} kg = 4.0 × 10^{35} kg. (19)

Equation (19) is obviously false, and by 86 orders of magnitude! Yet many theoreticians make use of (18), or of natural units in the same form that substitute ℏ for h or 8πG for G [ ]. We could multiply h by a literal real number r^{±1}, where r ≠ 1. Then (11) becomes

Um = 7.4 × 10^{−51} r^{±1} kg. (20)

We could also multiply G by a literal real number u^{±1}, where u ≠ 1. Then (17) becomes

Um = 4.0 × 10^{35} u^{±1} kg. (21)

Hence, (19) becomes

7.4 × 10^{−51} r^{±1} = 4.0 × 10^{35} u^{±1} (22)

and

r^{±1}·u^{∓1} = 5.4 × 10^{85}. (23)

Thus, literal numbers such as 1/2π or 8π or anything similar cannot begin to account for the magnitude of the error shown by (19).
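The size of the mismatch, and the futility of bridging it with conventional numerical factors, can be checked in a few lines of Python (constants are the rounded values used above; variable names are ours):

```python
# Compare the two unit masses implied by h = 1 and G = 1 (with c = 1,
# Ut = second), and the dimensionless rescalings often absorbed into h and G.
import math

um_h = 7.4e-51  # kg, from (11)
um_g = 4.0e35   # kg, from (17)

gap = um_g / um_h
print(f"ratio = {gap:.1e}")                           # ≈ 5.4e85
print(f"orders of magnitude = {math.log10(gap):.0f}")  # ≈ 86

# Rescalings such as hbar = h/(2·pi) or 8·pi·G shift the result by at most
# about one order of magnitude, nowhere near the 86 required.
for factor in (1 / (2 * math.pi), 8 * math.pi):
    print(f"|log10({factor:.3f})| = {abs(math.log10(factor)):.2f}")
```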

It has been demonstrated that the excessive use of natural units due to a hypothesis in the form of h = c = G = 1 implies a grossly false assumption, such as 7.4 × 10^{−51} = 4.0 × 10^{35}. Since anything can be proven if we start with a false hypothesis, the considerable body of theoretical research that relies on this false assumption [ ] must be deemed suspect.

To avoid this problem, theoreticians must choose either c = h = 1 or, exclusively, c = G = 1, depending upon whether they are chiefly concerned with atomic physics or with gravity, respectively. As a reviewer was astute enough to point out, the hypothesis c = h = G = 1 has no physical meaning whatsoever; it is merely used to simplify calculations. But the constants h and G have very different physical meanings, and equating them leads to a mathematical self-contradiction.