Non-Dimensionless Products

Existence of a non-dimensionless product

Imagine the dimensional matrix

$$
\begin{array}{c|ccccc}
 & y & x_1 & x_2 & \cdots & x_n \\
\hline
M & a & a_1 & a_2 & \cdots & a_n \\
L & b & b_1 & b_2 & \cdots & b_n \\
T & c & c_1 & c_2 & \cdots & c_n
\end{array}
$$

such that the constants a, b, and c are not all zero. Under what conditions does there exist a product of the form

$$y = x_1^{k_1} x_2^{k_2} \cdots x_n^{k_n}?$$

First condition

If y is not dimensionless, a product of the form

$$y = x_1^{k_1} x_2^{k_2} \cdots x_n^{k_n}$$

exists if, and only if, the dimensional matrix of the variables x1, x2, …, xn has the same rank as the dimensional matrix of the variables y, x1, x2, …, xn.

Suppose the exponents k1, k2, …, kn exist such that they are a solution of the linear system of equations

$$
\begin{aligned}
a_1 k_1 + a_2 k_2 + \cdots + a_n k_n &= a \\
b_1 k_1 + b_2 k_2 + \cdots + b_n k_n &= b \\
&\ \,\vdots \\
g_1 k_1 + g_2 k_2 + \cdots + g_n k_n &= g
\end{aligned}
$$

where a, b, …, g are the dimensional exponents of y.

For the dimensional matrix of the coefficients of the k's,

$$
\begin{bmatrix}
a_1 & a_2 & \cdots & a_n \\
b_1 & b_2 & \cdots & b_n \\
\vdots & \vdots & & \vdots \\
g_1 & g_2 & \cdots & g_n
\end{bmatrix},
$$

let its rank be r.

And let R be the rank of the above dimensional matrix augmented by the column a, b, …, g:

$$
\left[
\begin{array}{cccc|c}
a_1 & a_2 & \cdots & a_n & a \\
b_1 & b_2 & \cdots & b_n & b \\
\vdots & \vdots & & \vdots & \vdots \\
g_1 & g_2 & \cdots & g_n & g
\end{array}
\right].
$$
But the theorem on the consistency of a system of linear equations (proved below) tells us that the above system of equations is consistent if, and only if, the ranks are equal: r = R.

Because the system is consistent and the ki's are a solution, the product $y = x_1^{k_1} x_2^{k_2} \cdots x_n^{k_n}$ must therefore exist.∎
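
To make the test concrete, here is a minimal NumPy sketch of the rank criterion and of solving for the exponents. The choice of variables (a hypothetical example: force expressed through density, velocity, and length) is my own illustration, not the author's.

```python
import numpy as np

# Columns are the dimensional exponents (rows M, L, T) of the variables.
# Hypothetical example: density rho = M L^-3, velocity v = L T^-1, length l = L.
X = np.array([[ 1.0,  0.0, 0.0],   # M exponents of rho, v, l
              [-3.0,  1.0, 1.0],   # L exponents
              [ 0.0, -1.0, 0.0]])  # T exponents

# Dependent variable: force F = M L T^-2.
y = np.array([1.0, 1.0, -2.0])

r = np.linalg.matrix_rank(X)                        # rank of the x-matrix
R = np.linalg.matrix_rank(np.column_stack([X, y]))  # rank of the augmented matrix

if r == R:
    # The system X k = y is consistent; solve for the exponents k.
    k, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("product exists, exponents k =", k)  # -> [1. 2. 2.], i.e. F ~ rho v^2 l^2
else:
    print("no product of powers of the x's has the dimension of y")
```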

 
Before one jumps into the topic of consistency and inconsistency, one must first become familiar with two other terms: a system of linear equations and a solution of the system.

Consider the equation

$$4x_1 - x_2 + 3x_3 = -1.$$
One will notice that this is a linear equation in the variables x1, x2, and x3.

Take another equation which is given by

$$3x_1 + x_2 + 9x_3 = -4.$$
This, too, is a linear equation in the variables x1, x2, and x3.

Since both equations above are linear in the same variables x1, x2, and x3, we can lump them together as a system and ask for values of the variables that satisfy both equations simultaneously:

$$
\begin{aligned}
4x_1 - x_2 + 3x_3 &= -1 \\
3x_1 + x_2 + 9x_3 &= -4
\end{aligned}
$$

Here the values x1 = 1, x2 = 2, and x3 = −1 satisfy both equations: 4(1) − 2 + 3(−1) = −1 and 3(1) + 2 + 9(−1) = −4.

Generalizing this, we define a system of linear equations as

A finite set of linear equations in the variables x1, x2, …, xn.
Whose solution is defined as
A sequence of numbers s1, s2, …, sn such that x1 = s1, x2 = s2, …, xn = sn is a solution of every equation in the system.
Furthermore, there may exist another sequence of numbers t1, t2, …, tn that also solves every equation in the system. The solution set is therefore
The set of all the solutions of the system.

Not all systems of linear equations have solutions

Consider the system given by

$$
\begin{aligned}
x_1 + x_2 &= 4 \\
2x_1 + 2x_2 &= 6.
\end{aligned}
$$

Dividing the second equation by 2, we see that the above system is the same as

$$
\begin{aligned}
x_1 + x_2 &= 4 \\
x_1 + x_2 &= 3.
\end{aligned}
$$

In other words, the equations in the system contradict each other, and the system has no solution.

We then define an inconsistent system as

A system of equations that has no solution.
And a consistent system as
A system of equations that has at least one solution.
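
As an aside, the two definitions are easy to check numerically. The sketch below (my own illustration, assuming NumPy is available) asks for a least-squares solution and tests whether it satisfies the system exactly; failure means no exact solution exists.

```python
import numpy as np

def has_solution(A, b, tol=1e-9):
    """Return True if A x = b has at least one exact solution."""
    x, *_ = np.linalg.lstsq(A, b, rcond=None)  # best-fit x in the least-squares sense
    return bool(np.allclose(A @ x, b, atol=tol))

# Consistent system: 4x1 - x2 + 3x3 = -1, 3x1 + x2 + 9x3 = -4.
A1 = np.array([[4.0, -1.0, 3.0], [3.0, 1.0, 9.0]])
b1 = np.array([-1.0, -4.0])

# Inconsistent system: x1 + x2 = 4, 2x1 + 2x2 = 6.
A2 = np.array([[1.0, 1.0], [2.0, 2.0]])
b2 = np.array([4.0, 6.0])

print(has_solution(A1, b1))  # True  -> consistent
print(has_solution(A2, b2))  # False -> inconsistent
```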

 
Recall the definition of rank (ibid. 5.):
The dimension of the row and column space of a matrix A is called the rank of A.
Furthermore
A system of linear equations Ax = b is consistent if and only if b is in the column space of A.
Let a system of linear equations be

$$A\mathbf{x} = \mathbf{b},$$

which is equivalent to

$$
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}
=
\begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}.
$$

This can be further written as

$$
\begin{bmatrix}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n \\
\vdots \\
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n
\end{bmatrix}
=
\begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}
$$

or

$$
x_1 \begin{bmatrix} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{bmatrix}
+ x_2 \begin{bmatrix} a_{12} \\ a_{22} \\ \vdots \\ a_{m2} \end{bmatrix}
+ \cdots
+ x_n \begin{bmatrix} a_{1n} \\ a_{2n} \\ \vdots \\ a_{mn} \end{bmatrix}
=
\begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}.
$$

Therefore, b is a linear combination of the column vectors of the matrix A.

From the definition (ibid. 5.) we know that a linear combination of vectors exists only if at least one set of scalars x1, x2, …, xn exists. It follows that if a solution x supplying these scalars exists, then b must be a linear combination of vectors, and the expansion above shows that it is precisely a linear combination of the column vectors of A.

Therefore, the system is consistent if, and only if, b is a linear combination of the column vectors of A, and consequently b is in the column space of A.∎

 
A system of linear equations Ax = b is consistent if and only if the rank of its augmented matrix is equal to the rank of A.
Consider the consistent linear system Ax = b, where A is an m × n matrix, x an n × 1 vector, and b an m × 1 column vector; [A b] is the augmented matrix.

From the lemma (ibid. 5.)

Nonzero row vectors in a row-echelon form of a matrix A form a basis for the row space of A.
we know that the number of nonzero row vectors in a row-echelon form of A gives us the dimension of the row space (which equals the dimension of the column space) and consequently the rank of the matrix A.

From the lemma

Ax = b is consistent if and only if b is in the column space of A
and since our system is consistent, the column vector b, the only column of [A b] not already in A, must lie in the column space of A. Appending a column that already lies in the column space of A does not enlarge that space, so row reduction of A and of [A b] yields the same number of nonzero rows.

Therefore, for a consistent system Ax = b, matrices A and [A b] will have equal ranks.∎
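
The same two example systems from before can be run through this rank criterion directly; again a small illustrative sketch of my own rather than part of the original argument.

```python
import numpy as np

def is_consistent(A, b):
    """Rank test: A x = b is consistent iff rank(A) == rank([A b])."""
    augmented = np.column_stack([A, b])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(augmented)

A = np.array([[1.0, 1.0], [2.0, 2.0]])
print(is_consistent(A, np.array([4.0, 6.0])))  # False: rank(A)=1, rank([A b])=2
print(is_consistent(A, np.array([4.0, 8.0])))  # True: second equation is twice the first
```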

Second condition

If y = ƒ (x1, x2, …, xn) is a dimensionally homogeneous equation, and if y is not dimensionless, there exists a product of powers of the x's that has the same dimension as y.
This theorem depends on the theorem for the first condition (see proof).
Hypothesis (for contradiction): y = ƒ(x1, x2, …, xn) is dimensionally homogeneous, and a product of the form $y = x_1^{k_1} x_2^{k_2} \cdots x_n^{k_n}$ does not exist.

Consider the dimensional matrix of the variables y, x1, x2, …, xn involving the three fundamental dimensions [M], [L], and [T], given below:

$$
\begin{array}{c|ccccc}
 & y & x_1 & x_2 & \cdots & x_n \\
\hline
M & a & a_1 & a_2 & \cdots & a_n \\
L & b & b_1 & b_2 & \cdots & b_n \\
T & c & c_1 & c_2 & \cdots & c_n
\end{array}
$$

where the constants a, b, and c are not all zero, and the rank of this dimensional matrix is R.

The hypothesis tells us that a product of the form $y = x_1^{k_1} x_2^{k_2} \cdots x_n^{k_n}$ does not exist. Let r be the rank of the dimensional matrix of the independent variables x1, x2, …, xn; we already know that the rank of the dimensional matrix of the dependent and independent variables together is R. Then, according to the theorem for the first condition,

$$r \neq R.$$

Moreover, the definition of the rank of a matrix tells us that deleting the first column of the dimensional matrix cannot increase its rank, so r ≤ R; combined with r ≠ R, this gives

$$r < R.$$
Case R = 3. Since the rank is 3, some 3 × 3 submatrix has a nonzero determinant; and because r < 3, every 3 × 3 minor drawn from the x-columns alone vanishes, so this submatrix must include the column of y. Relabeling the x's if necessary, take its other two columns to be those of x1 and x2. Therefore, the determinant

$$
\Delta = \begin{vmatrix}
a & a_1 & a_2 \\
b & b_1 & b_2 \\
c & c_1 & c_2
\end{vmatrix}
$$

is not equal to zero.

Expanding the determinant by cofactors along the first column (Laplace expansion), the cofactors are

$$
\begin{aligned}
C_{11} &= b_1 c_2 - b_2 c_1, \\
C_{21} &= a_2 c_1 - a_1 c_2, \\
C_{31} &= a_1 b_2 - a_2 b_1,
\end{aligned}
$$

so that

$$\Delta = a C_{11} + b C_{21} + c C_{31} \neq 0.$$
If the first column is deleted and replaced by any other column of the parent dimensional matrix,

$$
\Delta_i = \begin{vmatrix}
a_i & a_1 & a_2 \\
b_i & b_1 & b_2 \\
c_i & c_1 & c_2
\end{vmatrix} = a_i C_{11} + b_i C_{21} + c_i C_{31},
$$

then for i = 1 or i = 2 the determinant is automatically equal to zero, because two columns coincide. For instance, for i = 1,

$$
\begin{aligned}
\Delta_1 &= a_1(b_1 c_2 - b_2 c_1) + b_1(a_2 c_1 - a_1 c_2) + c_1(a_1 b_2 - a_2 b_1) \\
&= a_1 b_1 c_2 - a_1 b_2 c_1 + a_2 b_1 c_1 - a_1 b_1 c_2 + a_1 b_2 c_1 - a_2 b_1 c_1 \\
&= 0.
\end{aligned}
$$
And for i = 2,

$$
\begin{aligned}
\Delta_2 &= a_2(b_1 c_2 - b_2 c_1) + b_2(a_2 c_1 - a_1 c_2) + c_2(a_1 b_2 - a_2 b_1) \\
&= a_2 b_1 c_2 - a_2 b_2 c_1 + a_2 b_2 c_1 - a_1 b_2 c_2 + a_1 b_2 c_2 - a_2 b_1 c_2 \\
&= 0.
\end{aligned}
$$
But for i ≥ 3 the determinant is not automatically equal to zero. For i = 3,

$$
\begin{aligned}
\Delta_3 &= a_3(b_1 c_2 - b_2 c_1) + b_3(a_2 c_1 - a_1 c_2) + c_3(a_1 b_2 - a_2 b_1) \\
&= a_3 b_1 c_2 - a_3 b_2 c_1 + a_2 b_3 c_1 - a_1 b_3 c_2 + a_1 b_2 c_3 - a_2 b_1 c_3.
\end{aligned}
$$

However, the matrix defining Δ3 is composed entirely of columns of the dimensional matrix of the independent variables x1, x2, …, xn, and from the theorem for the first condition we know that r < R = 3. By the definition of the rank of a matrix, every 3 × 3 submatrix of a matrix of rank less than 3 has zero determinant. Hence

$$\Delta_3 = 0.$$

Extending this argument, the determinants Δi are equal to zero for all i ≥ 3.

Let

$$C_{11} \equiv \alpha, \qquad C_{21} \equiv \beta, \qquad C_{31} \equiv \gamma;$$

then our above results for the determinants are summarized as

$$
\begin{aligned}
\Delta &= a\alpha + b\beta + c\gamma \neq 0, \\
0 &= a_i\alpha + b_i\beta + c_i\gamma, \quad \text{for all } i = 1, 2, \ldots, n.
\end{aligned}
$$
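
These two relations can be verified numerically for a concrete dimensional matrix. In the sketch below the variables (force as y, with x's built only from M and L, so that r = 2 < R = 3) are an invented example, not one from the text.

```python
import numpy as np

# Dimensional exponents (rows M, L, T). y = force = M L T^-2;
# the x's involve only M and L, so the x-matrix has rank r = 2 < R = 3.
y = np.array([1, 1, -2])                 # a, b, c
X = np.array([[1, 0, 1],                 # a_i: mass, length, mass*length
              [0, 1, 1],                 # b_i
              [0, 0, 0]])                # c_i: no x depends on T

(a1, b1, c1), (a2, b2, c2) = X[:, 0], X[:, 1]
alpha = b1 * c2 - b2 * c1                # C11
beta  = a2 * c1 - a1 * c2                # C21
gamma = a1 * b2 - a2 * b1                # C31

print(y @ np.array([alpha, beta, gamma]))    # Delta = -2, nonzero
print(X.T @ np.array([alpha, beta, gamma]))  # [0 0 0]: zero for every column x_i
```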

Since our hypothesis claims that y = ƒ(x1, x2, …, xn) is dimensionally homogeneous,

$$K f(x_1, x_2, \ldots, x_n) = f(K_1 x_1, K_2 x_2, \ldots, K_n x_n)$$

where

$$
K = A^{a} B^{b} C^{c}, \qquad
K_i = A^{a_i} B^{b_i} C^{c_i} \quad \text{for all } i = 1, 2, \ldots, n,
$$

and A, B, and C are the arbitrary positive factors by which the fundamental units of [M], [L], and [T] are rescaled.
Let G be some arbitrary positive constant, and choose

$$A = G^{\alpha}, \qquad B = G^{\beta}, \qquad C = G^{\gamma}.$$

Substituting these into the above K's we get

$$
\begin{aligned}
K &= A^{a} B^{b} C^{c} = G^{\alpha a}\, G^{\beta b}\, G^{\gamma c} = G^{(a\alpha + b\beta + c\gamma)} = G^{\Delta}, \\
K_i &= G^{(a_i\alpha + b_i\beta + c_i\gamma)} = G^{0} = 1, \quad \text{for all } i = 1, 2, \ldots, n.
\end{aligned}
$$
Because Ki = 1 for every i, while K = G^(aα + bβ + cγ) = G^Δ can be made into any positive value by the choice of G,

$$K f(x_1, x_2, \ldots, x_n) = f(K_1 x_1, K_2 x_2, \ldots, K_n x_n)$$

becomes

$$K y = f(x_1, x_2, \ldots, x_n).$$

This means that any arbitrary positive constant multiplied by y is returned by the operator ƒ acting on the same, unchanged independent variables. In other words, there is no single-valued correspondence from the independent variables x1, x2, …, xn to the dependent variable y; that is, ƒ(x1, x2, …, xn) cannot be a function. This contradicts the hypothesis that y = ƒ(x1, x2, …, xn) is dimensionally homogeneous, under which y is by definition a function of the x's. Intuitively, we have rescaled the fundamental units in a way that changes the numerical value of y while leaving every x unchanged, so no function of the x's alone can determine y.
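
Continuing the invented force example from the earlier sketch, the following lines exhibit the contradiction concretely: rescaling the time unit leaves every Ki equal to 1, so the x's are numerically unchanged, while K = G^Δ multiplies y by an arbitrary amount.

```python
import numpy as np

y_exp = np.array([1, 1, -2])             # dimensional exponents a, b, c of y (force)
X = np.array([[1, 0, 1],
              [0, 1, 1],
              [0, 0, 0]])                # the x's involve only M and L

alpha, beta, gamma = 0, 0, 1             # cofactors found in the previous sketch

for G in (2.0, 10.0, 100.0):
    A, B, C = G**alpha, G**beta, G**gamma            # unit rescaling factors
    K = A**y_exp[0] * B**y_exp[1] * C**y_exp[2]      # factor multiplying y
    Ki = [A**a * B**b * C**c for a, b, c in X.T]     # factors multiplying each x_i
    print(G, Ki, K)  # Ki is always [1.0, 1.0, 1.0]; K = G**-2 varies with G
```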

We can therefore state that, for a dimensional matrix of rank three, if y = ƒ(x1, x2, …, xn) is dimensionally homogeneous and y is not dimensionless, then there exists a product of powers of the x's that has the same dimension as y.

Following the same steps, one can prove the statement for other ranks.∎

Next:

Buckingham's Theorem (p:8) ➽